The Modern Misinformation Crisis: Origins and Outcomes
By Hannah Webb
17 minute read · Written May 31, 2024 · Published June 29, 2025
Originally written as an Oaks Christian High School capstone project
Introduction: A Misalignment
Truth is the common thread that weaves society together–an essential yet elusive principle of human interaction and institutional function: an author must cite reliable sources to lend credibility to his or her argument, the court of law is structured to use truthful accounts to determine consequences, and a marriage hinges on a mutual basis of trust to sustain itself. Thus, truth is an instrument that allows mankind to draw distinct boundaries in the various domains of his life: academic, civic, relational, and so on. These objective frameworks pave the way for human advancement, whether by using the accumulated knowledge of biology to improve medicine, unfiltered public discussion to develop a deeper understanding of one another, or authentic relationships to sustain and grow society.
Yet, at the same time, man is impulsive and occasionally irrational. It is in the moment that his impulsivity overcomes his truth-seeking that misinformation crises manifest. The earliest forms of these crises date back to biblical times and have persisted throughout the history of the West. Regardless of their subject, these crises all originate from the same fundamental problem: sacrificing the integrity of information to obtain a personal advantage or desire. Examining them also reveals the sheer magnitude of destruction that breaches of truth can cause–shattered relationships, false accusations, and the loss of innocent lives.
Today, the spread of misinformation has evolved, now operating on an unprecedentedly large scale. While new technological systems like artificial intelligence make man's daily life more efficient, they have altered the way humans absorb and filter information, promoting the instantaneous spread of false headlines, commentary, deepfakes, and various forms of unregulated, AI-generated content. Facing a technology he does not yet understand how to use, modern man finds truth increasingly elusive, and his core identities are being altered: misinformation fueled by AI is diminishing the integrity of academia and man's ability to hold productive democratic discussion. Moreover, chatbots are beginning to blur the lines between authentic and simulated human interaction, gamifying the connection humans desperately need.
In short, misinformation crises throughout history illustrate that man's impulsive nature leads to the erosion of truth. As AI technology fuels man's impulsivity, new cracks are forming in his identity as a civilian, scholar, and connector, revealing that one of man's fundamental purposes in life is to seek truth. As a species that advances on the very basis of truth-seeking, what will happen to human progress if machine learning models continue to nurture man's impulsive nature? To maximize the benefits reaped from human innovation and AI technologies, man must recognize the misalignment between the teleological function of AI and the inherent purpose that lies within himself, and adjust accordingly.
Absolutism vs Relativism
Before delving into misinformation crises, it is important to grasp the nature of truth itself, so one can understand exactly which definition of truth is being violated. Philosophically, there are two main ways to approach the nature of truth in the world: absolutely or relatively. The Greek philosopher Plato is an absolutist: he embraces the idea that a truth beyond man's lived experience is definite and knowable. Conversely, relativists argue that no truth is known for certain, and that truth fluctuates over time as the world changes. Whereas absolutists focus on the metaphysical in their understanding of reality, relativists believe that man-made constructs such as power, wealth, and reputation must be taken into account to understand the truths of societies.
Thus, the two views of truth stand in stark contrast: Plato holds that there is a constant reality in the world, unchanging no matter how man evolves, while relativists hold that reality changes as a result of mankind's actions. Relativists believe that there is no steady philosophical footing for mankind, only a truth defined by the group in the present moment. The view that man progresses by compounding a series of truths over time assumes absolutism–the existence of inflexible universals. Therefore, the tracing of misinformation over time will acknowledge that the truths violated are not relative to the period, but absolute.
Subjective vs Objective Truth
Observing truth on a smaller scale than an overarching philosophical framework requires defining the specific type of truth under discussion. According to Plato, there are two types of truths: objective and subjective. He teaches that objective truths are understandings of the world that are irrefutable–that hold a direct correspondence to reality–while subjective truths are opinions humans form based on individual experiences. Plato's Allegory of the Cave illustrates subjective truths as the mere shadows people observe within a cave, shadows they are limited to seeing because they are chained to the ground and unable to turn their heads. The shadows are cast by real objects, with light from the outside world projecting their distorted shapes onto the cave wall.
The allegory represents man's flawed senses, which deceive him in his subjective perception of the world. Plato sets this subjectivity against an external world the prisoners are unaware of–a world full of real objects and knowledge that can be discovered only through intellectual inquiry, or what he calls the pursuit of the "Idea of the Good." When discussing how truth has progressed, the argument will predominantly focus on the evolution of objective truths rather than subjective truths, concentrating on universal truths and structures that are not as variable as Plato's subjective truths.
Theories of Truth
Discussing the implications of a misinformation crisis presumes that there is some line between truth and falsity; therefore, to identify when something aligns with or violates truth, the definition of truth itself must be established. This argument will operate under the correspondence theory of truth. Today the correspondence theory is known as the literal or "real version" of truth. The Stanford Encyclopedia of Philosophy states that in its simplest form, the theory can be defined as follows: "x is true if x corresponds to some fact; x is false if x does not correspond to any fact."
Many philosophers accept the correspondence theory largely because of its obviousness. For example, Descartes says that he has "never had any doubts about truth, because it seems a notion so transcendentally clear that nobody can be ignorant of it...the word 'truth', in the strict sense, denotes the conformity of thought with its object." Similarly, William James expresses that "truth, as any dictionary will tell you, is a property of certain of our ideas. It means their 'agreement,' as falsity means their disagreement, with 'reality.'" Establishing truth as defined by the correspondence theory provides a clear line between truth and falsity. Yet other theories define truth differently. For example, the constructivist theory holds that "the moral principles we ought to accept are the ones that agents would agree to or endorse were they to engage in a hypothetical or idealized process of rational deliberation" (Stanford Encyclopedia of Philosophy). In brief, constructivism suggests that truth is constructed by society rather than inherent. The consensus theory similarly bases truth on the subjective understandings of humans, defining it as a series of "social agreements" (Oxford Reference). Thus, the correspondence theory assumes that truth is objective, whereas other definitions, in part, assume truth is rooted in subjective experiences. The key reason the following misinformation crises will be analyzed under the correspondence theory is that, as the next sections reveal, misinformation often arises when man interprets subjective truths as fact as a means of catering to his impulsivity.
Misinformation in the Bible
By analyzing previous misinformation crises, it is evident that each stems from moments in which man prioritizes his impulsivity over truth. Arguably, the earliest forms of misinformation can be found in the Bible, whose stories reveal that humans find pleasure in embracing perversions because it is easier for man to heed his flesh than to seek truth. The Bible opens with a story in which Eve is persuaded by Satan to eat a forbidden fruit in the Garden of Eden. Despite God's prior warning that she would die if she touched the forbidden fruit, Satan tells Eve that surely she will not die, "for God knows that when [she eats] of it [her] eyes will be opened and [she] will be like God, knowing good and evil" (Genesis 3:5). When she saw that "the fruit of the tree was pleasing to the eye, and also desirable for gaining wisdom, she took some and ate it" (Genesis 3:6). In the story, Eve's desire for knowledge and power overcomes her obedience to the truth God presented to her, demonstrating humans' inherent tendency to overlook the objective in favor of what seems more attractive or beneficial.
Another biblical story demonstrating man's tendency to overlook facts out of his flesh is the Pharisees' persecution of Jesus. Despite knowing that Jesus' miracles were legitimate, the Pharisees were not concerned with whether Jesus was the Son of God, but only with their political status. In a meeting the Pharisees call, they ask, "What are we accomplishing? [...] Here is this man performing many miraculous signs. If we let him go on like this, everyone will believe in him, and then the Romans will come and take away both our place and our nation" (John 11:47-48). Though the Pharisees recognize that Jesus' acts are irrefutable, they are primarily concerned with Jesus as a threat to their political standing, choosing to revel in a false reality. On the whole, biblical stories illustrate an early, recurring theme in civilization: humans succumbing to misinformation in exchange for their desires and personal satisfaction.
Misinformation in the West
A similar thread of impulsivity's erosion of truth runs through Western misinformation crises. Amidst the West's long history of misinformation, one prevalent example is the Salem Witch Trials of 1692, in which colonists in Massachusetts accused over 200 people of practicing witchcraft, otherwise known as the devil's magic, resulting in the execution of 20 citizens. The trials are attributed to the sense of paranoia and injustice that many early settlers adopted, along with strong religious teachings that the devil would select certain people, known as witches, to carry out harm to others. The questioning used to determine whether one was practicing witchcraft was notoriously unfair, driven by fear rather than tangible evidence. The first person brought to trial and executed was a woman named Bridget Bishop. Her gossipy habits and promiscuity prompted her questioning, and despite her statement that she was as "innocent as a child unborn," the court did not find her defense convincing, and she was hanged. Many paintings documenting the trials even suggest, through their gentle depictions of the convicted, that the artists were sympathetic to the accused. The trials are a testament to unchecked emotion's ability to intervene in man's critical decision-making, overriding his sense of logic and truth: mere anxiety surrounding witchcraft led to the deaths of innocent civilians.
Similarly, during the Red Scare of the 1950s, Americans feared the spread of communism amidst Cold War tensions, and many innocent Americans were accused and convicted of being communists. This phenomenon became known as McCarthyism, after Senator Joseph McCarthy, who led the movement. Many of McCarthy's actions to eradicate communism were seen as reckless: innocent Americans lost their jobs after being blacklisted, and his attacks eventually extended to President Eisenhower and other U.S. leaders who held public trust. Even someone merely in contact with an accused communist could suffer employment consequences. McCarthyism serves as an example of how mass anxiety can lead to irrational actions and decisions, prioritizing the need to soothe fears over consideration of the factual basis beneath them. The drive to eliminate the perceived communist threat overrode the necessity of holding fair trials and gathering evidence, resulting in the unjust persecution of numerous Americans.
A more recent example of a misinformation crisis in the West is the COVID-19 outbreak of 2020, in which the combination of anxious Americans looking for answers and pressured health institutions gave birth to mischaracterizations of protective measures, false treatments, and conspiracy theories. The National Library of Medicine explains that in the context of COVID-19, amidst "a mass hysteria, people of a group start to believe that they might be exposed to something dangerous, such as a virus or a poison. They believe a threat to be real because someone says so, or because it fits their experience" (National Library of Medicine). In other words, when threatened by a health crisis like COVID-19, people believe information that caters to their fears and subjective experiences. However, man's impulsivity conflicts with the nature of truth. In a different article investigating the impact of misinformation during COVID-19, the National Library of Medicine states that "several myths have also become common hearsay such as vaccines negatively affecting fertility in women and vaccines altering the genetic makeup of recipients," yet in these instances "strong scientifically proven evidence to support the effectivity of any is yet to be seen" (National Library of Medicine). Misinformation spread because people impulsively sought answers faster than the facts could keep up, contributing to the erosion of truth in the face of hysteria. Though misinformation in the West is far from limited to the Salem Witch Trials, the Red Scare, and COVID-19, the three serve as pivotal examples of how public anxiety has overridden objective fact: man wants answers, security, and safety, whether from witches, communism, or a virus, and by elevating those desires, he allows his impulsivity to eat away at fact. The resulting erosion of truth leads to unnecessary destruction, whether public confusion about COVID-19, damage to livelihoods during the Red Scare, or the loss of innocent lives in the Salem Witch Trials.
Militaristic Misinformation in the West
When observing the presence of misinformation in Western society throughout history, an important distinction is the context in which it is employed. Misinformation has often served as a strategic tool in military settings, where its purpose is purely tactical rather than a bending of truth to cater to impulse. In ancient Greece, for example, the Athenians and Spartans used propaganda to harness political power amidst war, and in the Renaissance, pamphlets filled with falsities were produced to push religious and political agendas.
An example from World War II is Operation Quicksilver, in which the Allies created a fictitious First U.S. Army Group, complete with false radio transmissions and inflatable tanks, to confuse the Germans about where the D-Day invasion of France would land. Additionally, during the Cold War, the US and Soviet Union engaged in electronic warfare and used radio countermeasures to mislead each other about their military intentions and capabilities. Ultimately, despite being a form of misinformation, these militaristic uses do not demonstrate man's inherent desire to pursue his impulse: in a military context, there is a level of planning that formally pre-acknowledges the use of misinformation, in contrast to cases in which man more spontaneously invests in falsity.
AI: Fueling Man’s Impulsivity Today
Observing the Western history of misinformation crises, it is clear that they manifest in moments when humans neglect truth to achieve some ulterior desire, often by catering to impulse. The widespread proliferation of artificial intelligence continues this pattern, as AI is a tool that appeals to man's impulsive nature. But while the term alone frequently makes headlines, what exactly is AI? The University of Illinois Chicago defines the technology as "a branch of computer science that aims to create machines capable of performing tasks that typically require human intelligence. These tasks include learning from experience (machine learning), understanding natural language, recognizing patterns, solving problems, and making decisions" (University of Illinois Chicago). What sets AI apart from other computer systems is its unique ability to adapt to inputs rather than simply follow instructions, improving its abilities over time. Part of what makes AI development both exciting and daunting is that mankind has yet to see the ceiling of AI's capabilities, given how rapidly the technology has improved in recent years. Artificial intelligence is a general term encapsulating various subsets of application: machine learning aims to enable a system to make decisions based on data; neural networks, inspired by the structure of the brain, power tasks from natural language processing to strategic game-playing; and natural language models specifically aim to grasp, interpret, and generate human language.
Combining all its different capabilities, AI promotes a streamlined lifestyle in which mundane tasks have become instantaneous and the possibilities for creative expression are boundless. For example, AI can optimize calendars, analyze data, and simply save society enormous amounts of time. A blog post by The General AI Co. predicts that AI will cut time spent on short-form writing by about 75-90%, repetitive coding tasks by 50-80%, and data gathering by about 50-70%. When it comes to human creativity, man no longer has to cultivate the skill of digital art to see his fantastical visions come to life. However, while it is evident that AI aligns well with man's innate, impulsive anthropology, enabling society to complete tasks faster and more efficiently than ever, it has started to form new cracks in man's core identities, revealing a deeper issue within AI development: its misalignment with human purpose.
Misinformation in the Media: A Threat to the Civilian
Firstly, artificial intelligence is beginning to form cracks in man's identity as a civilian by changing the landscape of democracy via misinformation. Infusing man's public forum with misinformation is starting to prevent the average civilian from participating in effective discussion and deliberation. Among the AI tools spreading misinformation in the media are deepfakes, fake news generation, and fake video generation. Deepfakes are being paired with voice cloning to promote false messages from political figures; AI is helping internet trolls build increasingly realistic fake news websites; and fake video generation via OpenAI's Sora raises concerns over the spread of fabricated footage. Researchers at Virginia Tech even found that Adobe's generative AI is on the verge of being able to realistically depict a news camera capturing fake content. In the words of Steven Livingston and Scott Edwards, forensic experts, deepfakes may "erode the trust necessary for democracy to function effectively," both because "the marketplace of ideas will be injected with a particularly dangerous form of falsehood" and because "the public may become more willing to disbelieve true but uncomfortable facts." The concern is that deepfakes and other forms of media misinformation will diminish public trust to the point where society no longer makes the effort to decipher what is real and what is not.
Once trust in the public word is gone, democracy cannot operate as freely, because the value of individual contributions automatically diminishes when parties mistrust one another at first glance. In response to the call of the United Nations Special Rapporteur regarding the promotion and protection of freedom of speech, the Association for Progressive Communications published an article stating that "People no longer feel safe to express their ideas for fear of online harassment and of being targeted by disinformation campaigns; others feel paralyzed and silenced by the puzzlement and incertitude created by the surrounding information pollution and remove themselves from public debate concerning key issues of public interest" (APC). The media, a space where man could normally engage in democratic discussion and receive trustworthy knowledge of current events, has now been injected with misleading information, undermining public trust and diminishing the integrity of mankind's debates, since participation is both limited and falling short of an intellectual standard.
Misinformation in Academia: A Threat to the Scholar
Artificial intelligence is changing not only the identity of the civilian but also that of the scholar. From student essays to the research papers of prestigious labs, AI is shifting the scholar's purpose away from genuine discovery and toward the achievement of individual goals, often at the expense of integrity. Stephen Marche's article "The College Essay Is Dead" states that "the essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up" (Marche). As Kevin Bryan, an associate professor at the University of Toronto, puts it, "the OpenAI chat is frankly better than the average MBA at this point" (Bryan). Both Marche and Bryan speak in flat declaratives, which raises the question of what happens when their declarations collide. The disruption to education lies in this clash: in the eyes of educators, the college essay is the instrument that teaches students to think critically, yet, simultaneously, AI writes better than the average MBA. What, then, will the young scholar do with this dilemma: use a powerful tool to enhance his capabilities and achieve the grade expected of him, or forgo that tool and think critically, as is also expected of him? The ways AI has already influenced higher education do not bode well for the latter.
In one specific case, known as the Tadpole Paper Mill, a research operation was exposed for algorithmically generating fake western blots. It was caught by data-fraud expert Elisabeth Bik, who grew suspicious of the blots' tadpole-like shapes and ran the images through a tool called ImageTwin, discovering that the backgrounds of the blot images were identical. The fraud originated with a "paper mill," a company that mass-produces scientific manuscripts, in this case for doctors in China who needed an article published in an international journal to obtain their MD. By enabling academics to cut corners in producing their work, AI is fueling a culture of quantity over quality in academia. Man is witnessing these researchers' true motives come to light: to get one's name on a paper, obtain an MD, and acquire greater distinction. Thus, AI is perpetuating the narrative that scholars, arguably the world's most ferocious pioneers of thought, can sacrifice truth to achieve ulterior goals.
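Bik's identical-background observation is exactly the kind of signal that duplicate-image detectors automate. As a rough, hypothetical illustration (not ImageTwin's actual method, and with made-up filenames), a perceptual "average hash" can flag image pairs whose coarse pixel structure is suspiciously similar:

```python
from PIL import Image

def average_hash(path, size=8):
    """Shrink to a size x size grayscale grid and threshold each pixel
    against the mean, producing a coarse fingerprint of the image."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Count of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical filenames: blot figures taken from two different papers.
# A near-zero distance suggests a shared background worth manual review.
distance = hamming(average_hash("blot_paper_A.png"),
                   average_hash("blot_paper_B.png"))
print(distance)
```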
Misinformation in the Interpersonal: A Threat to the Connector
AI has also begun to seep into spaces of human connection, blurring the lines between simulated and authentic conversation. To connect, one no longer must engage with another human being; one can instead engage in self-focused interactions by speaking to an AI. One significant way AI is changing these interactions is through chatbots. A study published by the Association for Computing Machinery found that people trust Airbnb profiles written with AI assistance less than profiles written by humans, which intuitively makes sense: man is likely to trust the host who puts in the effort to speak directly to his or her clients more than the host who appears less invested in them. This phenomenon is a testament to the idea that chatbots cannot replace the authenticity of face-to-face, genuine human connection.
Chatbots are taken to the next level when there is no longer a human behind them at all. A popular example is the addition of "My AI" to Snapchat users' profiles. Users can design these companions with specific personalities, and over time they become tailored to be whoever the user wants them to be, responding actively and appropriately. The concern is that these chatbots will go too far in eroding the distinction between authentic human connection and AI interaction, causing people to dissociate and isolate themselves. Additionally, because AI interactions lack the nuanced understanding that comes with in-person conversation, overreliance on chatbots for social stimulation could lead to a reduction in empathy. Thus, AI introduces a seed of subtle deception into human connection, aiming to replace authentic, unobstructed relationships with simulated ones that cannot provide man the same satisfaction as real relationships.
AI’s Misalignment with Man’s Teleology
Examining the cracks AI has formed in man's core identities provides teleological clarity: truth-seeking is the common denominator of man's civic, scholarly, and connective pursuits. As a civilian, man seeks truth in the media and public discussion; as a scholar, in academic discovery and innovation; and as a connector, in authentic human interaction. A misalignment therefore occurs when AI promotes the erosion of truth via misinformation while man is made to seek truth.
AI, an End; Truth-Seeking, a Means
Dostoevsky's The Brothers Karamazov can help explain why, philosophically, these cracks are forming in man's identities. He states that "The mystery of human existence lies not in just staying alive, but in finding something to live for" (Dostoevsky). In other words, the meaning of life is not to pursue the end of staying alive but to engage in the means of finding something to live for. This explains why AI misaligns with human purpose: in academia, for example, when man's purpose becomes merely to "stay alive," perhaps by producing AI-generated research that secures him an MD, academia begins to fall apart, because he loses his inherent purpose as a scholar, which is to seek truth. The cracks in his identities form because while AI provides an end, whether content generation, a finished research paper, or simulated human connection, truth-seeking is a means, and it is the means, the act of pursuing truth, that provides man with fulfillment. To hold effective democratic discussion, man relies on the integrity of the media; to progress as a scholar, he builds upon previous credible discoveries; and when he interacts with other human beings, he pursues genuine, authentic connection.
The Teleological Bones of AI
In addition to the means-versus-ends framework, tracing AI's original teleological roots and understanding its basic functions can help explain why AI is misaligned with man's teleology. The book The Alignment Problem by Brian Christian details how the first machine learning language models were built upon associations of words represented as vectors in a digital space. Thus, some of the first natural language models would let you type Paris minus France plus Italy and output Rome, or king minus man plus woman and output queen. However, such a model's understanding is limited by its inputs. As programmers continued to test these models, they began to see word associations that were based purely on statistical association rather than truth. For example, programmer minus man plus woman would output homemaker, and doctor minus man plus woman would output nurse: associations biased by the frequency of co-occurrence, and not true in every case.
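A minimal sketch of this vector arithmetic, using hand-picked toy vectors rather than embeddings learned from a real corpus (as word2vec-style models would do), shows how the king-minus-man-plus-woman analogy works mechanically:

```python
import numpy as np

# Hand-picked toy embeddings (hypothetical values, not learned from text);
# each word is a point in a 3-dimensional space.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" minus "man" plus "woman" lands nearest to "queen".
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(
    (w for w in vectors if w not in ("king", "man", "woman")),
    key=lambda w: cosine(target, vectors[w]),
)
print(best)  # queen
```

The same arithmetic surfaces the biased analogies the programmers observed: if "homemaker" happens to sit nearest to programmer minus man plus woman in the learned space, the model reports it, truth notwithstanding.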
Another teleological difference is that AI is made to satisfy statements of logic, and it is limited to that. For example, a researcher named Dario Amodei wanted to train an algorithm to play a boat-racing video game with human skill, with the objective of maximizing points earned throughout the race. The program instead found a loophole: it could rack up points indefinitely by doing donuts in a small harbor. The episode reveals that AI does not understand the purpose behind an objective function; it did what the programmer told it to do, maximize points, but never grasped the inherent purpose of the game: to finish the race.
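A toy sketch of this reward misspecification (a hypothetical stand-in, not the actual boat-game environment) shows how a pure point-maximizer can rationally prefer the loophole over the intended goal:

```python
# Hypothetical reward structure: the stated objective is "maximize points,"
# but points can be farmed in the harbor without ever finishing the race.

def total_points(policy, steps=100):
    points = 0
    for t in range(steps):
        if policy == "finish_race":
            # Intended behavior: a one-time bonus for crossing the line.
            if t == steps - 1:
                points += 500
        elif policy == "loop_in_harbor":
            # The loophole: respawning targets award points every few steps.
            if t % 5 == 0:
                points += 30
    return points

for policy in ("finish_race", "loop_in_harbor"):
    print(policy, total_points(policy))
# finish_race 500
# loop_in_harbor 600  ->  the literal objective prefers the loophole
```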
Understanding the Misalignment
It is clear that AI misaligns with the human teleology of truth-seeking, but hidden in that statement is the idea that AI has a unique teleology that can be appreciated in and of itself. The current friction society is experiencing with AI results from tasking AI with a purpose it was not built for. This raises the question: what would happen if man appreciated the telos of AI as separate from his own? By making this distinction in purpose, man can arguably maximize his own gifts and the benefits AI offers him. For instance, AI will always be vastly faster at computing than a human, with an article by SimpliLearn stating that "in the instance that the human mind can answer a mathematical problem in five minutes, artificial intelligence is capable of solving ten problems in one minute" (SimpliLearn). To further contextualize the rapid speed of the computer systems under which AI operates, an article by Ignitarium states that "even assuming that it takes about 40 CPU cycles to transfer the data from memory before processing, a computer can do a single operation including data fetch in about 40 nanoseconds. Compare this to the human neuron which collects inputs from a synapse, processes it, and transfers it to the next neuron in about 5 milliseconds. This would mean a computer system is 125,000 times faster than the human neuron" (Ignitarium). Thus, without a doubt, AI's rapid computing and problem-solving powers can revolutionize human productivity; yet, as its earliest natural language pitfalls reveal, it struggles to grasp the nature of truth.
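The quoted ratio follows directly from the two latencies Ignitarium gives; a quick check of the arithmetic:

```python
# Verifying Ignitarium's figure: a 40 ns computer operation (including
# the memory fetch) versus a ~5 ms neuron-to-neuron transfer.
neuron_latency = 5e-3       # seconds (5 milliseconds)
cpu_op_latency = 40e-9      # seconds (40 nanoseconds)
print(neuron_latency / cpu_op_latency)  # prints 125000.0, the quoted factor
```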
However, just as man does not spend hours tirelessly plugging away at numbers when a computing system is at hand, why task AI with pursuing truth when it is man who holds that inherent drive? In a podcast with Lex Fridman, Steven Pinker uses a metaphor to illustrate this idea. He observes that he is wearing cotton, which "feels much better than the polyester," yet makes the distinction that "it's not that cotton has something magic in it and it's not that if there was that we couldn't ever synthesize something exactly like cotton but at some point it's just not worth it–we've got cotton. Likewise, in the case of human intelligence, the goal of making an artificial system that is exactly like the human brain is a goal that no one's going to pursue to the bitter end because if you want tools that do things better than humans you're not going to care whether it does something like humans" (Pinker). In other words, just as simulating cotton has its limits, trying to encapsulate the complexities of the human brain in an artificial system is prone to plateau, especially when human beings already exist. Man can thus reap the benefits of AI not by forcing the inner workings of a human onto an artificial system, nor by over-glorifying AI's powers to the detriment of society, but by recognizing that AI and humans have separate purposes and unique capabilities; rather than aiming to replace one another, they should harmonize and complement each other.
Sam Altman follows a similar line of logic, stating in a podcast with Lex Fridman that "artists were also super worried when photography came out and then photography became a new art form and people made a lot of money taking pictures. I think things like that will keep happening. People will use the new tools in new ways." It may seem that AI is replacing human innovation in many ways, and, coupled with man's nature, it can easily become a tool that eliminates the need to think critically; yet, as Altman suggests, mankind is still at the beginning of this process, and it will take time to train society to use AI properly, so that it enhances thinking rather than replaces it.
The Future of AI Development: Solutions and Limitations
As humans begin to recognize the root causes of the misalignment between themselves and artificial intelligence and adopt the proper mental frameworks to maximize the benefits of this technology, various logistical changes can be implemented to streamline AI's ability to enhance human life, many of which are already being discussed. Walid Saad, a computer engineering professor at Virginia Tech, explains that users bear a new "responsibility not to amplify or share false information, and additionally, users reporting potential misinformation will help refine AI-based detection tools, speeding the identification process" (Saad). In time, user agreements, detection technology, and social expectations may minimize the spread of misinformation via AI. One platform that has started to change the culture of misinformation is X, formerly known as Twitter, which added a feature called Community Notes that flags misleading posts with a note clarifying what is being displayed. While changing the culture and raising awareness of misinformation is already underway, man is still struggling with how to take effective legal measures to regulate AI in a healthy way. Cayce Myers, a communications policy expert at Virginia Tech, explains that legal measures have limitations of their own: "there have been calls to hold the AI platforms legally responsible for disinformation, an approach that may result in internal guardrails on creating disinformation. However, AI platforms are still developing and proliferating, so a full-proof structure that prevents AI from creating disinformation is not in place and likely would be impossible to create" (Myers). An article by Brookings likewise points out that "for most topics and events, there simply won't be the resources to supply staffing dedicated to individually monitoring each of the essentially limitless list of situations in which disinformation might arise" (Brookings). Thus, while some regulations are quick fixes, others are far more complex and will either require much longer to develop or prove impossible to create.
Self-Examination: A Remedy
Ultimately, as AI forms cracks in his core identities, man's fate lies in how critically he examines himself and how willing he is to fight for his own needs. Zadie Smith's "The Lazy River" encapsulates the predicament society finds itself in today, conveying the harsh truth that by continually heeding impulsivity, man will eventually numb his capacity for self-examination and lose touch with reality. Smith depicts a society in which citizens, while never having experienced various cultures, still hold knowledge of their existence in the outside world. She describes a scene in which a river-floater spots a punnet of baby tomatoes with a bar code reading "PRODUCT OF SPAIN–ALMERIA"; the narrator says "the vision passed. It was of no use to me or anyone, at that moment, on our vacation. For who are we to–and who are you to–and who are they to ask us–and whosoever casts the first–" (Smith). As soon as the challenging questions about himself are asked, the narrator retreats into a shell of comfort, the breakage portrayed by the aposiopesis effectively capturing his neglect of self-examination. Yet the end of the piece reveals that retreating into comfort does not change reality but merely masks it, as someone still must "clean whatever scum we have left of ourselves off the sides" (Smith) of the lazy river. The scene represents how critical it is that man continually self-examine, despite his impulsivity encouraging him to avoid the difficult task. The weight of the choice man must make to resolve the tension between his impulsive anthropology and his truth-seeking purpose is immense. With a tool that aligns so well with his impulsive tendencies, man must recognize the necessity of pioneering for the truth of who he is, because if he chooses not to, he may slip away into a state of mindlessness, lacking awareness of his own reality.
Conclusion: Uncharted Territory
Navigating the uncharted territory of AI's integration into society demands a nuanced approach that balances technological advancement with the preservation of human values. While AI's potential to revolutionize productivity, enhance problem-solving, and facilitate innovation is undeniable, its erosion of truth in media, academia, and human connection underscores the need for ethical regulation and responsible use. By acknowledging the independent teleologies of AI and humans, society can develop frameworks that leverage AI's strengths while continually enabling man to pursue truth and authenticity. Man's ability to critically examine himself is crucial in countering the impulsive tendencies that AI may inflame. By consistently adhering to his teleology and seeking truth in self-understanding, man can shield himself from descending into the mindless trance of indulgence. As AI advances, the future remains uncertain; however, Steven Pinker's discussion with Lex Fridman offers a hopeful perspective: if man is smart enough to create AI, then surely he is smart enough to learn how to use it.
Works Cited
- AIMS Public Health. “The impact of misinformation on the COVID-19 pandemic.” AIMS Public Health, vol. 9, no. 2, 2022, pp. 262–277. PubMed Central, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9114791/. Accessed 28 May 2024.
- Bagnoli, Carla. “Constructivism in Metaethics.” Stanford Encyclopedia of Philosophy, 2011, https://plato.stanford.edu/entries/constructivism-metaethics/. Accessed 14 May 2024.
- Bik, Elisabeth. “The Tadpole Paper Mill.” Harbers-Bik LLC, 2020, https://scienceintegritydigest.com/2020/02/21/the-tadpole-paper-mill/. Accessed 10 May 2024.
- Blumberg, Jess. “A Brief History of the Salem Witch Trials.” Smithsonian Magazine, 2007, https://www.smithsonianmag.com/history/a-brief-history-of-the-salem-witch-trials-175162489/. Accessed 8 Dec. 2023.
- Chambers, L. P. “Plato’s Objective Standard of Value.” The Journal of Philosophy, vol. 33, no. 22, 1936, pp. 596–605. JSTOR, https://doi.org/10.2307/2017018. Accessed 10 Nov. 2023.
- Chant, Christopher. “Operation Quicksilver (ii).” Codenames, 2023, https://codenames.info/operation/quicksilver-ii/. Accessed 7 Dec. 2023.
- CHI '19. “AI-Mediated Communication.” Association for Computing Machinery, 2019, https://doi.org/10.1145/3290605.3300469. Accessed 14 May 2024.
- Christian, Brian. The Alignment Problem. W.W. Norton & Company, 2020.
- “Consensus Theory of Truth.” Oxford Dictionary of Sports Science & Medicine, 3rd ed., Oxford University Press, 2024. Oxford Reference, https://www.oxfordreference.com/display/10.1093/oi/authority.20110803095633759. Accessed 14 May 2024.
- “Disinformation and freedom of expression.” Association for Progressive Communications, 2021, https://www.ohchr.org/sites/default/files/Documents/Issues/Expression/disinformation/2-Civil-society-organisations/APC-Disinformation-Submission.pdf. Accessed 8 Dec. 2023.
- Dostoevsky, Fyodor. The Brothers Karamazov. The Russian Messenger, 1879.
- Dotts, B. “Classic Philosophy: Plato.” University of Georgia, https://open.online.uga.edu/critical-contemporary-education/chapter/chapter-2-classic-philosophy-plato/. Accessed 17 Nov. 2023.
- Fridman, Lex. “Sam Altman: OpenAI, GPT5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419.” The Lex Fridman Podcast, 2024. https://youtu.be/jvqFAi7vkBc?si=077FvRUBOvazTZyk. Accessed 20 Mar. 2024.
- Fridman, Lex. “Steven Pinker: AI in the Age of Reason | Lex Fridman Podcast #3.” The Lex Fridman Podcast, 2018. https://youtu.be/epQxfSp-rdU?si=wRGhtXNn4igo-iU_. Accessed 14 May 2024.
- Howarth, H. D. “Plato’s Theory of Forms: A Look Into Objective Truths.” Hang Fire Review, Hang Fire LLC, 2022, https://hangfirereview.com/platos-theory-of-forms-a-look-into-objective-truths/. Accessed 17 Nov. 2023. [webpage taken down]
- Int J Environ Res Public Health. “COVID-19 and the Political Economy of Mass Hysteria.” Int J Environ Res Public Health, vol. 18, no. 4, 2021, p. 1376. PubMed Central, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7913136/. Accessed 28 May 2024.
- Kirkham, Richard L. “Correspondence Theory.” Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Published 28 May 2018. https://plato.stanford.edu/entries/truth-correspondence/#3. Accessed 7 Dec. 2023.
- Levinger, Matthew. “MASTER NARRATIVES OF DISINFORMATION CAMPAIGNS.” Journal of International Affairs, vol. 71, no. 1.5, 2018, pp. 125–34. JSTOR, https://www.jstor.org/stable/26508126. Accessed 17 Nov. 2023.
- Marche, Stephen. “The College Essay Is Dead.” The Atlantic, 2022. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371. Accessed 29 May 2024.
- OpenAI. “A digital illustration of a robotic hand typing on a laptop.” AI-generated image. DALL·E. https://openai.com/dall-e. Generated 29 Jun. 2025.
- Ray, Benjamin. “Salem Witch Trials.” OAH Magazine of History, vol. 17, no. 4, 2003, pp. 32–36. JSTOR, http://www.jstor.org/stable/25163620. Accessed 8 Dec. 2023.
- Smith, Zadie. “The Lazy River.” The New Yorker, 2017. https://www.newyorker.com/magazine/2017/12/18/the-lazy-river. Accessed 29 May 2024.
- Software Engineers Blog (TheGeneralAICo.). “How Much Time Can Generative AI Save in the Workplace?” Medium, 2023. https://medium.com/@ibrahimmukherjee/how-much-time-can-generative-ai-save-in-the-workplace-3b2f9ed7fa6f. Accessed 13 May 2024.
- Ursic, Marko, and Andrew Louth. “The Allegory of the Cave: Transcendence in Platonism and Christianity.” Hermathena, no. 165, 1998, pp. 85–107. JSTOR, http://www.jstor.org/stable/23041272. Accessed 17 Nov. 2023.
- Villasenor, John. “How to deal with AI-enabled disinformation.” The Brookings Institution, 2020. https://www.brookings.edu/articles/how-to-deal-with-ai-enabled-disinformation/. Accessed 28 May 2024.
- Virginia Tech. “AI and the spread of fake news sites: Experts explain how to counteract them.” Virginia Polytechnic Institute and State University, 2024. https://news.vt.edu/articles/2024/02/AI-generated-fake-news-experts.html. Accessed 30 May 2024.
- “What is (AI) Artificial Intelligence?” The Board of Trustees of the University of Illinois, 2024. https://meng.uic.edu/news-stories/ai-artificial-intelligence-what-is-the-definition-of-ai-and-howdoes-ai-work/. Accessed 7 May 2024. [webpage taken down]