Web AI and The Matrix Effect: How Generative AI Is Reshaping Human Cognition
Image by Jamillah Knowles & Reset.Tech Australia / https://au.reset.tech/ / Better Images of AI / Detail from Connected People / CC-BY 4.0


Another Deep Dive from Philosophy2u!

Knowledge is mediated. Our forms of knowing are mediated through language, symbols, historical and cultural values, and first-person perspective. This means that there is no such thing as a purely objective point of view.

This innate feature of human existence is what philosophers term pre-understanding, or all the conceptual, emotional, and historical "stuff" that we inherit and that comprises the lens through which we encounter the world and others.

While this may seem deeply problematic, think of it instead as the way each individual is uniquely configured, yielding a distinctly personal viewpoint that enables them to have unique relationships with others.

Pre-understanding becomes problematic when we forget that our ways of relating and knowing are mediated through it. We then assume that what we think is not just correct but to some extent incorrigible. Age and gender discrimination are common forms of this.

Or, we can assume we are correct by virtue of our pre-understanding, as if all those things that inform our point of view were the right ones. Cultural imperialism and nationalism are classic instances of this.

This is why an education and training in the humanities is so important. It cultivates skills and capacities to offset the blinkered, less desirable effects of pre-understanding by teaching critical forms of analysis and self-examination.

  • History, for instance, not only informs us about the past but teaches us how to read and interpret documents and stories and how to apply the lessons we learn from these sources to critically address our current problems.
  • Literature and Art connect us with living traditions of authorship through which we can relate to the past and cultivate a critical imagination to discover ways in which individuals and societies can be innovated and revitalized.
  • Philosophy provides us with skills of conceptual analysis, argumentation, and critical thinking. Its program can be roughly divided into the aim of understanding how things really are and the aspiration to enable humans to flourish individually and collectively.

Absent this broadly liberal arts education, individuals not only lose the means to empower themselves; they subject themselves to a debilitating condition that corrodes the capacities needed to articulate, elevate, and free themselves – critical analysis, imagination, and argumentation.

I'll refer to these going forward as "the liberating capacities".

Enter Generative AI which, on the face of it, seems to enhance our capacity to know and reflect since it facilitates the search for knowledge. Enter a prompt in an AI chatbot and get an answer in seconds.

However, there are good reasons to be wary of this next phase of technologically mediated knowledge. It may very well change how we relate to our liberating capacities. This is because AI is creating a condition in which either

  • we think that we are engaging our liberating capacities by virtue of using AI, OR
  • we forget about the importance of our liberating capacities altogether.

To get a sense of why, let us consider how we typically vet information via the conventional search engine in the era of Web 2.0.

"My Spider Sense Is Tingling"

How do you verify that your sources of information are accurate and credible?

The answer to this question may vary somewhat based on your generation. Older generations may remember doing research for term papers with a fair amount of investigative work in library stacks or combing through periodicals on microfiche. There's something nice about this form of discovery since it marries physical, intellectual, and emotional effort. (The library where I did most of my research for my PhD didn't have an elevator for students, and the philosophy section was on the fourth floor!)

Web 2.0

Younger generations will identify with investigation and research through the use of the internet. With Web 2.0, our current iteration of the World Wide Web, most online content is user-generated. Much of its consumption therefore involves an ability to evaluate the authenticity and accuracy of sources one comes across in a search.

Unlike with physical publications, production costs for Web 2.0 outputs are minimal. And so, it is easier for anyone with internet access to create a webpage to disseminate ideas and beliefs. Because of this, there tends to be a correlation between the ease of creating content and the onus to vet sources (assuming that creating content includes its publication):

The easier it is to create content, the greater the onus to vet sources.

Let's take an example:

  • If I wanted to find out about a topic – say, legal precedents in the state of Florida where a moving company was sued for “bait and switch” practices – I would first need to figure out the best wording for the query to return relevant results.
  • If I were a little savvy and critical, I would be aware that the top-ranking hits would be determined by a number of factors relating to search engine optimization practices, which may not necessarily correlate with accuracy or quality.
  • For instance, Wikipedia articles often rank among the top one to three results of a search. Yet its publicly curated format means that some entries can be unreliable. At the end of the day, the quality of information for my search would rely roughly equally on the sources provided and my ability to evaluate them.

I'll come back to this example. In the meantime, let's consider why the Web 2.0 version of research and investigation is not as clear-cut and efficient as it might first seem.

According to a study by the media analytics company Chartbeat, cited by Time Magazine, "a stunning 55% spent fewer than 15 seconds actively on a page". In terms of pre-understanding, this probably means readers are either:

  1. no longer interested after 15 seconds, or
  2. confirming their previously held ideas by scanning a page's key claims and accepting or rejecting them (i.e. confirmation bias).

If you think hyperlinking sources in an article is sufficient for supporting claims, think again. On many questionable websites, articles and blogs report "facts" yet link to their own articles to create the impression of having done research. Or they link to sites that appear academic in nature yet are not.

This describes the iterative, or echo-like, nature of Web 2.0. Its tendency to let confirmation biases persist prevents one's pre-understanding from checking and revising itself when appropriate.

As we'll see with Generative AI, the nature of searching for answers and information on the Web changes categorically. We enter a new relation to the process of verifying and validating information.

Like Catching Flies

Generative AI is becoming increasingly embedded in the apps we use. If you use search engines like Google, Bing, or OpenAI's new SearchGPT, you'll see at the top of your search results something like an "AI overview" of your topic. As long as the other search hits are still provided, the AI overview can be helpful for comparing the accuracy of information.

A straightforward example involves asking about the cooking time for a whole fish on a grill. The AI overview will offer a range that can be compared to specific recipes, which tend to take into account particular features like the weight and type of fish and how many times to flip it.

But now imagine a situation where the ranked sites in a typical search don't appear. Perhaps with AI integration, tech companies find that users follow the path of least resistance.

It's not even a question of spending 15 seconds on a site. It's more a matter of users wanting the quickest answer without having to sort through sites.

Then, we're kind of like flies caught in a web since our access to information is limited to what a few AI chatbots can generate. This is Web AI. It's AI-curated information.

Web AI

It's a scenario where the human tendency to opt for the least amount of effort – the tendency captured in the notion of homo economicus associated with John Stuart Mill – replaces the practice of investigating and vetting materials. Whether out of laziness or complacency, humans tend not to exert much effort or change their methods and habits unless compelled to.

To get a better sense of the kinds of risks I have in mind, let us draw on a worn trope. The Matrix describes a world in which most humans have no idea they are living in a simulated reality.

There is a telling moment when one of the characters who knows about the Matrix deliberately chooses to live in the simulated reality as opposed to the real (dystopian) world. This moment captures how many of our attitudes are shaped by the prioritization of familiarity and ease.

You may remember Cypher's remarks:

“You know, I know this steak doesn't exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize? [Takes a bite of steak] Ignorance is bliss.”

There are several kinds of ignorance. If Generative AI were to subdue our inclination to investigate and vet sources and replace our practices of doing so, it's not much of a stretch to imagine a scenario where most humans simply don't know how to critically assess sources. Call this AI-generated ignorance.

Let's return to the topic of moving companies in Florida to see how AI-generated ignorance might change how we engage with the verification of sources.

When I asked an AI chatbot for legal precedents where a plaintiff alleged "bait and switch" practices by moving companies operating out of Florida, it returned a few items.

For each legal case, the chatbot gave a very convincing summary of how the defendant was found in violation of fair-trade practices. Impressive?

Unfortunately, only one case was relevant: Swenson v. Storage, LLC, where the defendant was found guilty of holding the customer's personal items ransom until he paid a much higher moving fee. Williams v. WMX Technologies is a real case, but it involves allegations of fraud in a securities matter, not moving practices.

Mullins v. Allied Movers is a fictitious case that was "hallucinated" by the chatbot. Upon closer investigation, however, the chatbot was most likely drawing to some degree on the case "Office of the Attorney General of the State of Florida v. Alliance Moving and Storage LLC et al", where the court granted a default judgment against the defendant for failing to present a defense against allegations of fraud.

What can we learn from something like this?

The issue is not simply that Generative AI can be wrong. That is true but beside the point. Rather, the issue is the impression it gives: it encourages us to believe that it has done a great deal of the investigative work.


Welcome to Your Web AI Matrix

Noting this power of impression, let us return to the philosopher's concern about simulated realities as articulated by Morpheus in The Matrix:

"Have you ever had a dream, Neo, that you were so sure was real? What if you were unable to wake from that dream? How would you know the difference between the dream world and the real world?"

The problem that philosophers note, despite their many attempts to resolve this thought experiment, is that there is no way to be certain that you are not still dreaming. Verifying that your perspective is not caught within a dream would require access to something external to the dream world. Since your perspective is always "from the inside" or "within the world", you have no way of knowing when you might "step" out of the dream world. Even taking the "red pill" (per the film) will not work, since doing so would still take place from within the dream world. [Sorry, Alt-Right! You're still living in a dream.]

The Web AI Matrix is not so nefarious and all-encompassing. At least not yet.

The current state of Generative AI is very much constrained by training methods focused on safety and accuracy. The dream, as it were, is not intended to be mis- or dis-informative. But it can very well be debilitating. In other words, solely relying on Generative AI is like being spoon-fed, or worse yet, like being hooked into a machine that feeds you information. As a result, we can lose both our inclination and our capacity to investigate and understand.

Neo: Why do my eyes hurt?
Morpheus: You've never used them before.
[Image: Neo in the reflection of Morpheus' sunglasses]

Critics of my view may point out that many forms of technology that enhance human abilities have not resulted in debilitating conditions. This is true.

However, my argument assumes that the kinds of enhancements provided by Generative AI differ from those of other technologies in how they affect human cognition. Learning to use a calculator is an illustrative example. It replaces the need for arithmetic skills and knowledge of formulas, but it does not necessarily replace one's understanding of and relation to number as such. Its cognitive influence is therefore limited.

In contrast, technologies like GPS involve a cognitive change in our understanding of and relation to the world. A McGill University study has shown that frequent use of GPS makes us less capable of remembering and navigating physical space. I would argue the same for Generative AI and the human capacity to investigate: we lose a fundamental ability to think critically and to know how to discover and examine evidence for further insight and clarity.

And, this brings me back to the issue of the dream world.

While there appears to be no answer to the problem articulated by Morpheus (and philosophers before him), there is a response that heads off the problem before it can gain traction.

No Pills: Practice Those Liberating Capacities

There may be no answer that provides 100% certainty; but then, there never is, given our personal, first-person perspective. As every athlete and musician knows, the only way to prevent the atrophy of your "muscles" (whether they be physical, intellectual, or emotional) is to use them regularly.

It comes down to practice and finding ways to subordinate AI to these practices. Ensuring you don't fall prey to "the path of least resistance" when doing research online is one way to stay focused; another path offers an interesting and ironic twist.

It may just so happen that due to the ways our lives are becoming more entwined with AI, we may deliberately seek "dumb" spaces where we can apply ourselves in order to engage our whole bodies and minds. Perhaps, picking up a physical book and slowly mulling over its details and implications without the need or desire to simply react or accumulate information will become a new form of liberation.

In the end, all we can rely on comes down to sustaining and improving our capabilities:

  • to investigate the matter at hand;
  • to hold beliefs based on evidence and reasoning;
  • to be prepared to revise those beliefs in light of better evidence and reasoning.

To hold and act otherwise, as one famous German philosopher put it, would be scandalous!


About the Author

Todd Mei (PhD) is a former Associate Professor of Philosophy specializing in hermeneutics, the philosophy of work and economics, and ethics. He is currently a researcher and consultant in meaningful work and is the founder of Philosophy2u. He also enjoys training chatbots on the side for major social media and tech companies (who shall remain anonymous). With over 20 years of experience in teaching, researching, and publishing, Todd enjoys bringing insight, innovation, and work-life revolution to organizations, businesses, and individuals.

#KnowledgeMediation #PreUnderstanding #CriticalThinking #HumanitiesEducation #PhilosophyOfKnowledge #GenerativeAI #CulturalImperialism #LiberalArts #CriticalAnalysis #MindfulLearning #IntellectualEmpowerment #AIandHumanities #ResponsibleAI #AIandEthics #AIEthics #EthicalAI #AITransparency #AIandSociety #AIRegulation #AIandBias
