AI is made for bullshit
Review article
Shannon Vallor: The AI Mirror. How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford UP 2024.
“I'll be your mirror. Reflect what you are, in case you don't know.” So sings Nico on The Velvet Underground & Nico from 1967. Shannon Vallor quotes the line on page 1 of The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, a book that analyzes how generative artificial intelligence (GenAI) can currently be understood vis-à-vis human civilization and its development.
While many have hailed GenAI as a revolutionary breakthrough and technological achievement, Vallor is more skeptical. She does believe that GenAI can be used in many fruitful ways that enhance human thinking and help solve challenges. But her focus is on the conservative, preserving power of GenAI, which perpetuates some problematic aspects of the way developed societies have evolved. At bottom, artificial intelligence is a mirror of the way humans have thought, and that way of thinking is what has led to the global problems we currently face. “[Today's] most advanced AI systems are constructed as immense mirrors of human intelligence” (p. 2).
So where others have seen bias as a problem with GenAI, Vallor sees it the other way around. The bias and other unhelpful aspects of the answers GenAI generates stem from the way human intelligence has operated. The values and logic of industrialized Western societies underpin the way generative artificial intelligence works: the large language models at the heart of GenAI are trained on data that reflects how humans have expressed themselves and thought. “AI isn't developing in harmful ways today because it's misaligned with our current values. It's already expressing those values all too well. AI tools reflect the values of the wealthy postindustrial societies that build them” (p. 9).
Vallor has two main purposes with the book. Alongside showing how GenAI mirrors existing human thinking and values, she wants to explain how GenAI in its current, commercialized form threatens our humanity by promoting what she calls machine thinking. To save humanity, we should not use artificial intelligence as a mirror; rather, we should try to break the mirror effect of the technology and instead use it to amplify our own reflective capacity. All things being equal, this would be easier if the technology were developed for that purpose instead of being designed to work against it.
The book is divided into seven chapters, each dealing with a different theme about how GenAI works and the consequences of its use.
In the first chapter, Vallor describes a problem with the concept of intelligence. What exactly do we mean when we describe a machine as intelligent? Intelligence is not easy to define, as it has many aspects; it is more than just logic. When it comes to artificial intelligence, you may not need to know what intelligence is in order to design an artificially intelligent technology, as long as you can get it to do something that humans also do, such as using language in a reasonably grammatically correct way.
But when it comes to what is called artificial general intelligence (AGI), it's a different story. Whether current generative artificial intelligence is a step toward an artificial general intelligence resembling a (human) consciousness is a hotly debated question. Here Vallor clearly belongs to the camp that does not believe there is a straight path from artificial intelligence to general intelligence, because the latter requires lived experience and a mental model of the world.
This argument is based on a phenomenological approach to the world, which she develops further in the second chapter. General intelligence requires an interpretive and context-dependent understanding of the world one is part of, which cannot be reduced to neural connections in the brain. Vallor writes that human thinking is connected to a person's embodied being in the world. “[Minds] almost certainly come into existence through the body and its physical operations... Our minds are embodied rather than simply connected to or contained by our bodies” (pp. 39-40).
GenAI is a further step in a development driven by a supposedly value-free technical rationality, and thereby also a further step away from the humanity that has been an essential element of modern human development. The distance between the technical and the moral domains, which (possibly) coexisted in antiquity and can still be found in both the Renaissance and the Enlightenment, has grown so great that it threatens humanity. Vallor speaks of the problem of “the gap between our lived humanity and the AI mirror” (p. 63). The technology largely escapes ordinary moral control, since it is developed in dependence on large, multinational financial investments that more or less elude regulation.
In the next two chapters, Vallor draws on Aristotle's concept of phronesis and on a notion of public reasoning. Phronesis implies bringing the knowledge one possesses to bear while weighing contextual factors. Morality, for Aristotle, is thus not a set of rules but a way of reasoning, and he points to pragmatic deliberation and negotiation (dialogue) as part of the repertoire to be used. Unlike the mirror metaphor, where the same model or template is always applied, phronesis suggests a dynamic element in the moral balancing of different considerations. If humanity is an essential part of that balancing, then one can also say something about how technology should be developed, which would be a different approach than asking about efficiency and business models.
The idea of public reasoning can be traced back to Immanuel Kant, who speaks of using reason in public so that it can be subjected to argumentation and criticism. Vallor, however, refers to the American philosopher Wilfrid Sellars and his notion that knowledge is developed in “a ‘logical space of reasons, of justifying and being able to justify what one says’” (p. 106). This suggests a pragmatic understanding of knowledge.
Vallor wants this logical space to apply to the moral domain as well: “a space of moral reasons” (p. 108). Although she does not refer directly to public sphere theory, her reasoning resembles the way Jürgen Habermas presents morality in discourse ethics.
The challenge for the moral space now is that when applying artificial intelligence to judgment and decision-making, the moral muscle of humans is relaxed. “Today, AI poses a new threat to our moral capacity. Its offer to take the hard work of thinking off our shaky human hands appears deceptively helpful, neutral, and apolitical - even 'intelligent'. This only makes the offer more dangerous” (p. 109).
This is a “moral deskilling”, akin to the way we have lost the ability to remember phone numbers or to navigate with a map. But while the latter two are technical skills, the former is moral and a pillar of humanity. If we as humans lose the ability to reason morally, there is no humanity.
Generative artificial intelligence is designed to produce bullshit in the sense that its algorithms generate sentences and textual connections that appear to rest on reasoning and knowledge but are in fact based on pattern recognition in language. GenAI has no notion of what is true or morally good, only patterns of words that seem to relate to something in the outside world. Bullshit, in this sense, is a term for indifference to what is true and right. Today the challenge is not the public use of lies but indifference to whether what is said is true or not; alternative facts. “What should worry us most about today's AI mirrors, particularly generative large language models, is that they are hard-wired for bullshit... They are simply built to generate fact-like patterns of language - plausible statements that sound like what a person might say to a given prompt” (p. 120).
We therefore need “to reclaim our humanity”, as the subtitle says. We must insist that the moral and the technical go hand in hand instead of being kept apart. Vallor argues that institutions need to develop forward-looking techno-moral expertise. If we want to address the big global problems that threaten human existence, the current form of artificial intelligence is the worst place to turn, because it merely repeats the patterns that led to those problems. The over-reliance on technical and economic progress - or what Jacques Ellul has called “the rule of efficiency” (p. 186) - is the main cause of the problems and therefore will not lead to their solution.
The solution is “pretechnical” and must be found in the way we want society to be - and in the way we want to develop the technology. It starts from the common conversation about the needs we have as humans, which also include the health and sustainability of the planet.
Vallor does not think this approach can be dismissed as naive idealism. When the world is ending, you cannot call naive the person who points out the only possible solution to the problem.
The AI Mirror is an interesting and thought-provoking book. Its premise that generative artificial intelligence is a giant mirror is technically well founded, and her critique targets the economic rationality behind much technological development, a rationality that is indeed deeply problematic both democratically and morally. But technology does not determine its own use, as Vallor also points out, though she could have elaborated this more. The mirror metaphor sometimes gets in the way of further nuance and reflection - and it also leads to a good deal of repetition of the same points.
Furthermore, the author does not address the concept of emergence in the training of large language models. It would have been interesting had she included this notion in her treatment, since emergence might mean that large language models are less mirror-like and develop something new from existing patterns.