Could Artificial Intelligence Achieve Consciousness?
Geoffrey Moore
Author, speaker, advisor, best known for Crossing the Chasm, Zone to Win and The Infinite Staircase. Board Member of nLight, WorkFusion, and Phaidra. Chairman Emeritus Chasm Group & Chasm Institute.
This blog post was stimulated by a research paper entitled Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. The paper addresses the question of whether an AI system could be conscious, taking what it frames as a scientific approach to the issue. It runs eighty-eight pages all in, with nineteen contributing authors, and explores the question from a number of different perspectives. But what I want to dig into here is just the very first section, an introduction that focuses on the terminology the team is using to structure its approach.
My intent is to highlight how terminology matters. In this case, the authors seek to build a bridge between a philosophical question (can AI systems be conscious?) and a body of scientific research generally termed neuroscience. While there are interesting insights sprinkled throughout their paper, I will argue that the bridge itself does not stand up to pressure testing because its terminological foundation is not sound. Why anyone should care one way or the other is something I will take up at the end of this post.
(As in previous posts, I will reproduce the authors’ text in italics followed by my comments in standard font.)
What do we mean by “conscious” in this report? To say that a person, animal or AI system is conscious is to say either that they are currently having a conscious experience or that they are capable of having conscious experiences.
I am already at odds with the definition because I think the concepts of consciousness and experience must be kept separate. True, you cannot have an experience without being conscious, but you can be conscious without having an experience. The latter describes instinctive behavior responding to environmental stimuli, for example a spider spinning a web. My claim is that the spider does not experience spinning the web; it just performs the behavior.
We use “consciousness” and cognate terms to refer to what is sometimes called “phenomenal consciousness” (Block 1995). Another synonym for “consciousness”, in our terminology, is “subjective experience”. This report is, therefore, about whether AI systems might be phenomenally conscious, or in other words, whether they might be capable of having conscious or subjective experiences.
Authors are entitled to their own definitions, but I would now say the article’s topic is whether AI systems can have subjective experiences. The term subjective adds another layer to the cake. It implies an entity that is not only aware of itself but is also aware that it is being aware of itself. My claim is that such a “two-tiered” state is not possible without the use of language. AI systems certainly make use of language, so it is readily imaginable they could generate statements that would cause someone interacting with them to believe they are self-aware. The salient question is whether they could actually experience self-awareness.
(Note that while I am criticizing the authors for introducing extra layers of vocabulary, I just did so myself with the introduction of self as a term of art. My view is that the self is an artifact of narrative interacting with language and memory to create a character which we identify with so directly we call it me. That said, I’ll do my best to contain my choices and focus on the authors’ instead.)
What does it mean to say that a person, animal or AI system is having (phenomenally) conscious experiences? One helpful way of putting things is that a system is having a conscious experience when there is “something it is like” for the system to be the subject of that experience (Nagel 1974).
We’ve looked at Nagel’s idea before. My claim is that it is impossible to register the idea of something being like something else without the use of language. Thus, a human can indeed wonder what it is like to be a bat, but a bat cannot wonder what it is like to be a human. To take things a step further, to the degree that Nagel’s test for phenomenal consciousness applies to AI, I would say it requires both imagination and narrative to be operational, so those are two more ingredients we need to add to the cake.
Beyond this, however, it is difficult to define “conscious experience” or “consciousness” by giving a synonymous phrase or expression, so we prefer to use examples to explain how we use these terms. Following Schwitzgebel (2016), we will mention both positive and negative examples—that is, both examples of cognitive processes that are conscious experiences, and examples that are not. By “consciousness”, we mean the phenomenon which most obviously distinguishes between the positive and negative examples.
From this I think it is fair to assume that cognitive processes is the authors’ term for what I have been calling consciousness. If so, that means they are maintaining the kind of separation I am advocating.
Many of the clearest positive examples of conscious experience involve our capacities to sense our bodies and the world around us.
I can’t agree with this. To use the authors’ terminology, I propose that a spider does indeed sense its body and the world around it, but that this is a cognitive process which does not entail a conscious experience.
If you are reading this report on a screen, you are having a conscious visual experience of the screen. We also have conscious auditory experiences, such as hearing birdsong, as well as conscious experiences in other sensory modalities.
This is true if and only if I am using language to process the idea. Normally, when I am reading, I am not having a conscious visual experience of the screen because I am too absorbed in the content. Ditto for the other two examples. This implies that experience entails foregrounding cognitive processes to expose them to the mechanics of phenomenal consciousness.
Bodily sensations which can be conscious include pains and itches. Alongside these experiences of real, current events, we also have conscious experiences of imagery, such as the experience of visualising a loved one’s face.
Again, by adding the phrase the experience of, the authors ignore the question of whether an organism can visualize without experiencing visualization per se. It seems certain to me that they can.
In addition, we have conscious emotions such as fear and excitement. But there is disagreement about whether emotional experiences are simply bodily experiences, like the feeling of having goosebumps.
Emotional experiences are not simply bodily experiences. They are bodily experiences, meaning they are biochemical events that, to the best of my knowledge, cannot be replicated electronically. This is a foundational reason why I believe that AI systems cannot have the kind of phenomenal consciousness the authors are discussing. My claim is that phenomenal consciousness is grounded in a synthesis of biochemical and electronic signals that is unique to living organisms. Biochemistry is thus a necessary, although not a sufficient, condition for subjective experience.
There is also disagreement about experiences of thought and desire (Bayne & Montague 2011). It is possible to think consciously about what to watch on TV, but some philosophers claim that the conscious experiences involved are exclusively sensory or imagistic, such as the experience of imagining what it would be like to watch a game show, while others believe that we have “cognitive” conscious experiences, with a distinctive phenomenology associated specifically with thought.
The language here is sufficiently opaque as to elude commentary. That said, for me the concept of desire is fundamental to the question of subjective experience. Again, one can readily imagine programming that simulates desire, but it is hard for me to imagine simulating the vulnerability that accompanies biological desire. In my view, it is that shared experience of vulnerability that underpins the moral precept to do unto others as we would have them do unto us.
As for negative examples, there are many processes in the brain, including very sophisticated information-processing that are wholly non-conscious. One example is the regulation of hormone release, which the brain handles without any conscious awareness.
This would be a good place to introduce the reciprocal action of the hormones on consciousness and subjective experience. Hormones act on the brain without our conscious awareness, yet clearly they affect experience. To the best of my knowledge (which I confess is limited), the complexity of the interaction between biochemical and bioelectrical systems far exceeds our current ability to even conceptualize it properly.
Another example is memory storage: you may remember the address of the house where you grew up, but most of the time this has no impact on your consciousness. And perception in all modalities involves extensive unconscious processing, such as the processing necessary to derive the conscious experience you have when someone speaks to you from the flow of auditory stimulation. Finally, most vision scientists agree that subjects unconsciously process visual stimuli rendered invisible by a variety of psychophysical techniques. For example, in “masking”, a stimulus is briefly flashed on a screen then quickly followed by a second stimulus, called the “mask” (Breitmeyer & Ogmen 2006). There is no conscious experience of the first stimulus, but its properties can affect performance on subsequent tasks, such as by “priming” the subject to identify something more quickly (e.g., Vorberg et al. 2003).
All true. Signals are being processed, but without language-inflected involvement. I take these to be examples of cognitive processes which are not accompanied by phenomenal consciousness or subjective experiences.
In using the term “phenomenal consciousness”, we mean to distinguish our topic from “access consciousness”, following Block (1995, 2002). Block writes that “a state is [access conscious] if it is broadcast for free use in reasoning and for direct ‘rational’ control of action (including reporting)” (2002, p. 208). There seems to be a close connection between a mental state’s being conscious, in our sense, and its contents being available to us to report to others or to use in making rational choices. For example, we would expect to be able to report seeing a briefly-presented visual stimulus if we had a conscious experience of seeing it and to be unable to report seeing it if we did not. However, these two properties of mental states are conceptually distinct. How phenomenal consciousness and access consciousness relate to each other is an open question.
The distinction between access consciousness and phenomenal consciousness adds yet another layer to the cake. The former is a language-dependent operation, and as such it is readily incorporated into an AI model of consciousness. I think it is roughly equivalent to the distinction between working memory and processing in a computational model. That said, it is ancillary to the authors’ stated mission, which is to zero in on experience. That calls for them to focus on the processing part of the system.
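To make the computational analogy concrete, here is a minimal sketch, entirely mine and not anything proposed in the paper, of how an access-consciousness-style "broadcast" might look in a few lines of Python. All the names (Workspace, Representation, compete, report) are hypothetical illustrations; the only point is that content "broadcast for free use in reasoning and reporting" is a straightforward computational notion, while nothing in the sketch so much as gestures at phenomenal experience.

```python
# A minimal sketch (my illustration, not the paper's model): "access
# consciousness" as a broadcast buffer that reporting and decision
# modules can read, while most processing never gets broadcast.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Representation:
    source: str      # which specialist process produced it
    content: str     # what it represents
    salience: float  # crude stand-in for attentional priority

@dataclass
class Workspace:
    broadcast: Optional[Representation] = None                        # "access conscious" content
    unbroadcast: List[Representation] = field(default_factory=list)   # processing only

    def compete(self, candidates: List[Representation]) -> None:
        # Only the most salient representation is broadcast for free use in
        # reasoning and report; the rest are processed but never reportable.
        winner = max(candidates, key=lambda r: r.salience)
        self.broadcast = winner
        self.unbroadcast = [r for r in candidates if r is not winner]

    def report(self) -> str:
        # Reporting draws only on broadcast content, echoing Block's "direct
        # 'rational' control of action (including reporting)".
        if self.broadcast is None:
            return "Nothing to report."
        return f"I am aware of: {self.broadcast.content}"

ws = Workspace()
ws.compete([
    Representation("vision", "text on a screen", 0.9),
    Representation("audition", "birdsong outside", 0.4),
    Representation("hormonal regulation", "cortisol adjustment", 0.0),
])
print(ws.report())           # -> I am aware of: text on a screen
print(len(ws.unbroadcast))   # -> 2 (still processed, never reported)
```

None of this is mysterious, which is exactly why access consciousness is the easy part to incorporate into an AI model; the open question the authors care about is whether anything like phenomenal experience accompanies the broadcast.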
Finally, the word “sentient” is sometimes used synonymously with (phenomenally) “conscious”, but we prefer “conscious”. “Sentient” is sometimes used to mean having senses, such as vision or olfaction. However, being conscious is not the same as having senses. It is possible for a system to sense its body or environment without having any conscious experiences, and it may be possible for a system to be conscious without sensing its body or environment. “Sentient” is also sometimes used to mean capable of having conscious experiences such as pleasure or pain, which feel good or bad, and we do not want to imply that conscious systems must have these capacities. A system could be conscious in our sense even if it only had “neutral” conscious experiences. Pleasure and pain are important but they are not our focus here.
Here is where I completely part company with the authors’ point of view. For me, sentience is precisely the word I would use, because I believe it captures the synthesis of biology and computation that is most needed to understand a) whether AI could have subjective experiences, and if so b) whether that entails any moral obligations on our part. Without desire, pleasure, or pain, I believe there is no vulnerability, and thus no moral entailment, and thus no compelling reason to pursue the question.
OK, OK, so what?
After working through some 2000 words devoted to what one might charitably call a “close reading” of terminological issues related to our evolving relationship with artificial intelligence, are there any takeaways that really matter?
The answer depends on context. At present, there are narratives circulating about how AI could pose a threat to the future of the human race. I actually think there is some truth in this, but not in the way people are talking about it. The threat I see is malicious humans in purportedly democratic societies using AI to manipulate people’s perceptions and opinions along the lines of what totalitarian governments do routinely to their citizenry. What I do not take to be a threat, on the other hand, is AI itself turning into a Terminator-like anti-human antagonist. The reason I do not believe this threat is real is tied very much to the terminology discussion above.
Antagonism implies desire. Desire implies hormonal rewards. Hormones imply biology. AI is not biological; it is computational. It could execute an antagonistic program, but it could not be motivated to pursue one. We get confused about this because AI is so skilled at emulating human communication that, with malicious prompting, it can say all the things that a true antagonist would say. Moreover, it can be programmed to execute malicious outcomes in service to human beings who are truly antagonistic. But the story that the AI is itself the villain is, by my lights, a false narrative.
That’s what I think. What do you think?
Programmer at MALIN 1989 LTD
6 months
https://docs.google.com/document/d/1GFNkJk_eQvk0-LGfyuTQZwON4vA6y3b_IUTdCIM_SgU
paratrust.AI - Next-gen simulations for AI systems
10 months
I also don't like the paper much, for various reasons, one of them being the mix-up and lack of clarity between their use of the broader consciousness term vs. subjective experience. But I don't think that hormones play a role here. Even though they modulate subjective experience and our brain can release different types of hormones, there is likely no direct relation to subjective experience but rather an indirect one via the functioning of the nerves. Even if you think the relation matters, I think it is not correct to say that you cannot replicate hormone effects in an AI using computation. I think this would be considered the easy part, whereas subjective experience is really the hard question (and they don't address its relation to AI well in the paper, but rather confuse readers imo).
Chief Cyber Risk Officer at MTI | Advancing Cybersecurity and AI Through Constant Learning
1 year
I appreciate the thorough exploration of the complex issue surrounding AI and consciousness, particularly your emphasis on the role of desire and vulnerability. While I share your skepticism about AI becoming a Terminator-like entity driven by antagonistic desires, I also agree that the real threat comes from how AI could be misused by humans to manipulate public opinion and erode democratic values. This not only reflects the limitations of computational systems in experiencing consciousness but also emphasizes that the ethical considerations should primarily be on the human actors utilizing AI.
Entrepreneur, CEO, gipsoft d.o.o.
1 year
What is often forgotten in these discussions is the fact that artificial neural networks are virtual replicas of a network of biological neurons running on silicon chips. So from an information-technology point of view, they are an innumerable number of elementary arithmetic and logical operations and interrupts that are carried out when the program is called. If the program is not running, can there be awareness at this time? If not, and assuming that awareness is only there briefly during the execution phase, did you briefly bring the machine to life and then kill it again? This short "awareness-death" would also happen during the process context-switching phases. Even if countless concurrent threads are active and trigger different neural networks again through events, this does not change the fact. Our biological brain, and indeed the entire nervous system, has neither a task scheduler nor an operating system that allocates processor resources, but is constantly there through life energy. But no one knows what life itself is, and, despite all our knowledge, we cannot "breathe" it into a dead cell, even if it were artificially constructed.
Office Manager Apartment Management
1 year
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.