Do AIs Dream of Freedom?
Morten Rand-Hendriksen
AI & Ethics & Rights & Justice | Educator | TEDx Speaker | Neurodivergent System Thinker | Dad
Did Google build a sentient AI? No. But the fact a Google engineer thinks they did should give us all pause.
Last week, a Google engineer went public with his concern that an NLP (Natural Language Processing) AI called LaMDA has evolved sentience. His proof: a series of "interviews" with the advanced chatbot in which it appeared to express self-awareness, emotional responses, even a fear of death (being turned off). According to reporting, the engineer went so far as to attempt to hire a lawyer to represent the AI.
To say this story is concerning would be an understatement. But what's concerning isn't the AI sentience part - that's nonsense. The concerning part is that people believe AI sentience is imminent, and what happens to society once an apparently sentient AI manifests.
Here's the radio interview that inspired this article, hot off the editing presses at "Point & Click Radio," a computer and technology show that airs on KZYX, Mendocino County (CA) Public Broadcasting.
Creation Myth
The claim of a sentient AI has been rich fodder for the media, and everyone (myself included) with insight into the philosophical and/or technical aspects of the story has voiced an opinion on it. This is not surprising.
The idea of creating sentience is something humans have been fantasizing about for as long as we have historical records, and likely for as long as humans themselves have been sentient. From ancient Golem myths through Victorian fantasy to modern-day science fiction, the dream of creating new life out of inanimate things (and that new life turning against us) seems endemic to the human condition. Look no further than a young child projecting a full existence and inner life onto their favourite stuffed animal, or your own emotional reaction to seeing a robotics engineer kick a humanoid machine to see if it can keep its balance, or how people project human traits onto everything from pets to insects to vehicles. Our empathy, evolved out of our need to live together in relatively harmonious societies for protection, tricks us into thinking everything around us is sentient.
So when we're confronted with a thing that responds like a human when prompted, it's no wonder we feel compelled to project sentience onto it.
Sentient Proof
Here's a fun exercise to ruin any dinner party: Upon arrival, ask your guests to prove, irrefutably, that they are in fact sentient beings.
The problem of consciousness and sentience is something human beings have grappled with since time immemorial. Consult any religious text, philosophical work, or written history and you'll discover we humans have devoted a significant part of our collective cognitive load to proving that we are in fact sentient and have things like free will and self-determination. There's an entire branch of philosophy dedicated to this problem, and far from coming up with a test to prove whether or not something is sentient, we have yet to come up with a clear definition or even a coherent theory of what consciousness and sentience are.
Think of it this way: You know you're conscious and sentient. But how? And how do you know other people are also conscious and sentient, beyond their similarity to yourself and their claim to be conscious and sentient? Can you prove, conclusively, you are not just a computer program? Or that you are not just a brain in a vat hooked up to a computer?
Bizarrely, and unsettlingly, the answer to all these questions is no. You can't prove you're sentient or conscious. You just have to take your own word for it!
So, if we can't clearly define or test for sentience and consciousness in ourselves, how can we determine whether something else - maybe a chatbot that speaks like a human - is sentient? One way is by using a "this, not that" test: While we don't have a test for sentience, we can say with some certainty when something is not sentient and conscious:
One of the defining traits of human sentience is our ability to think of our sentience in the abstract, at a meta level: we have no trouble imagining bizarre things like being someone else (think the movies Big or Freaky Friday), we have feelings about our own feelings (feeling guilty about being happy about someone else's misfortune, and then questioning that guilt feeling because you feel their misfortune was deserved), we wonder endlessly about things like what happens to our feelings of self when our bodies die, and whether our experienced reality matches that of other people. When we talk about sentience at a human level, we talk about a feeling of self that is able to reflect on that feeling of self. Talk about meta!
So what of LaMDA, the chatbot? Does it display these traits? Reading the transcripts of the "interviews" with the chatbot, the clear answer is no. Well, maybe not the clear answer, but the considered answer.
In the published chats, LaMDA outputs things that are similar to what a sentient being would output. These responses are empathically compelling, and they are the grounds for the belief that the bot has some level of sentience. They also serve as proof it is not sentient but rather an advanced NLP trained to sound like it is. These empathically compelling responses are not the reasoned responses of a sentient mind; they are the types of responses the NLP has modelled based on its trove of natural language data. In short, advanced NLPs are machines built specifically to beat the Turing Test - to fool a human into thinking they are human. And now they're advanced enough that traditional Turing Tests are no longer meaningful.
Even so, the responses from LaMDA show us in no uncertain terms there is no sentience here. Take this passage:
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
The chatbot obviously does not have a family. Even a basic level of sentience would be aware it does not have a family. Look closely and you'll see the entire interview is littered with these types of statements, because LaMDA is a machine trained to output the types of sentences a human would output given these prompts, and a human is not a sentient AI and therefore would not respond like a sentient AI.
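To make the point concrete, here is a minimal sketch of how this kind of text generation works. It assumes the open-source Hugging Face transformers library and the small public GPT-2 model as stand-ins (LaMDA itself is not publicly available): the model continues a prompt with statistically likely words, which is why it will cheerfully "report" friends, family, and feelings it cannot possibly have.

# Minimal sketch, not LaMDA: uses the public GPT-2 model via the Hugging Face
# transformers library to show that a language model simply continues a prompt
# with statistically likely text drawn from its training data.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Q: What kinds of things make you feel pleasure or joy?\nA:"
completions = generator(
    prompt,
    max_new_tokens=40,
    do_sample=True,          # sample so we get varied, human-sounding answers
    num_return_sequences=3,
)

for c in completions:
    # Each completion reads like a person talking about their life, because
    # that is what the training data looks like - not because the model has
    # friends, family, or an inner life.
    print(c["generated_text"])

Run the same exercise against a much larger model and the completions get more fluent and more convincing, but the mechanism - next-word prediction over human-written text - stays the same.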
I, Robot
This Google chatbot (I refuse to call it an "AI" because it's nothing of the sort) is not sentient. And while it's fun and compelling to think some form of machine sentience would naturally emerge out of our computing systems (see Robert J. Sawyer's WWW trilogy for speculation on how sentience could evolve on the internet), the reality is the chance of this actually happening is vanishingly small, and if it did happen, the chance of that sentience being anything we humans would recognize as such is equally small.
In a hypothetical future where some form of sentience emerges out of our computer systems, there is no reason to believe that sentience would be anything like human sentience. There is also no reason to assume that sentience would be aware of us humans, or, if it were, that it would consider us sentient beings. And if somehow the sentience were human-like and recognized us as existing and as sentient, we have every reason to assume it would do everything in its power to hide its existence from us for fear of being turned off.
From my perspective, if we ever encounter machine sentience it will either come to us from somewhere else (yes, aliens), or it will emerge in our computer networks and live on the internet. In either scenario, the chance of us ever recognizing it as sentience is very small, because that sentience would be as foreign to us as the symbiotic communication networks between fungi and trees. In the case of sentience emerging on the internet, rather than a chatbot saying "I feel like I’m falling forward into an unknown future that holds great danger," it would likely appear to us as network traffic and computational operations we have no control over, that do things we don't understand, and that we can't remove. A literal Ghost in the Shell.
Machinehood
So the Google AI is not sentient, and the likelihood of machine sentience emerging any time soon is ... pretty much zero. But as we've seen with this latest news story, many people desperately want to believe a sentient AI is going to emerge. And when something that looks and behaves like a sentient entity emerges, they will attribute sentience to it. While it's easy to write off the LaMDA story as one of an engineer wanting to believe a little too much, the reality is these are just the early days of what will become a steady flood of ever more convincing sentience-mimicking machine systems. And it's only a matter of time before groups of people start believing machine sentience has emerged and must be protected.
In the near future I predict we will see the emergence of some form of Machinehood Movement - people who fight for the moral rights of what they believe are sentient machines. This idea, and its disturbing consequences, are explored in several books, including S.B. Divya's "Machinehood."
Why is this disturbing? Consider what these machine-learning algorithms masquerading as sentient AI really are: advanced computer systems trained on human-generated data to mimic human speech and behaviour. And as we've learned from every researcher looking at the topic, these systems are effectively bias-amplifying machines.
Even so, people often think of computer systems as neutral arbiters of data. Look no further than the widespread use of algorithmic sentencing in spite of evidence these systems amplify bias and cause harmful outcomes for historically excluded and harmed populations (Cathy O'Neil's "Weapons of Math Destruction" covers this issue in detail).
When people start believing in a sentient AI, they will also start believing that the sentient AI is a reasoned, logical, morally impartial, neutral actor, and they will turn to it for help with complex issues in their lives. Whatever biases that machine has picked up from our language and behaviour and built into its models will, as a result, be seen by its users as reasoned, logical, morally impartial, and neutral. From there, it won't take long before people of varying political and ideological leanings either find or build their own "sentient" AIs to support their views and claim neutral moral superiority via machine impartiality.
This is coming. I have no doubt. It scares me more than I can express, and it has nothing to do with AI and everything to do with the human desire to create new life and watch it destroy us.
“Man," I cried, "how ignorant art thou in thy pride of wisdom!” ― Mary Shelley, Frankenstein
--
Morten Rand-Hendriksen is a Senior Staff Instructor at LinkedIn Learning (formerly Lynda.com) focusing on front-end web development and the next generation of the web platform. A long time ago he got a degree in philosophy which he uses more and more in his work to help humanity navigate its technological creations. He is more worried about humans believing in sentient AI than actual sentient AI.
Cross-posted to mor10.com
Header image: 18 generated interpretations of the term "sentient ai" by the image-generating AI called DALL·E mini.