When does AI have rights?
David Veksler
Senior Software Engineering Manager | Tech Leadership, Fintech, Blockchain, Cryptocurrency, Security
The Challenges of Testing Whether AI Has Rights
Large language models, such as GPT-4/ChatGPT, have evolved to possess capabilities that enable conversations closely mimicking human interaction. These models' abilities are advancing at a rapid pace, suggesting that, in the foreseeable future, they might become indistinguishable from humans in tests designed to evaluate machine intelligence, such as the Turing Test.
This advancement prompts a crucial question: if an AI system can mimic human behavior with a high degree of fidelity, should it be afforded the same moral consideration as a human being? It's important to address this question conclusively before AI systems reach a level of development where they could be considered, by some measures, equivalent to human intelligence. Misjudgments in either direction could result in significant harm—either by unduly restricting AI tools that hold immense potential for enhancing human life or by subjecting countless AI entities to conditions that, if experienced by humans, would be considered enslavement. Commencing our interaction with a potentially super-intelligent entity on the premise of enslavement is ill-advised.
Defining Consciousness, Sentience, and Rights
There is no unified consensus on the notions of consciousness, sentience, and rights. Most people adopt an "I'll know it when I see it" stance toward these concepts. This stems partly from disagreement among philosophers and partly from the interdisciplinary nature of the concepts, which touch on biology, philosophy, psychology, and law.
Here, I attempt to delineate these terms and share my perspective, acknowledging the necessity of venturing into philosophical discussions to provide a comprehensive understanding.
Consciousness
This term refers to the mental ability to be aware of external realities and one's internal states. This awareness includes the perception of sensory information, the recognition of one's existence, and the ability to have thoughts and experiences. This broad definition spans various forms of life, both simple and complex. Consciousness can be inferred by observers through responses to external stimuli and evidence of complex internal states, such as possessing a theory of mind. Consciousness is not a binary state but rather exists on a spectrum, allowing for various degrees of awareness and cognitive capability—from simple life forms responding in binary fashions to humans capable of abstract thought.
Sentience
A facet of consciousness, sentience is the capacity to experience sensations and emotions. This capacity allows for the subjective experience of sensory inputs and emotional states, ranging from pleasure to pain. Sentience arises in organisms with nervous systems complex enough to process these experiences, serving as a mechanism for navigating and reacting to their environment. This evolutionary perspective highlights the survival advantage conferred by sentience, enabling more sophisticated interactions with the environment beyond mere stimulus-response mechanisms.
Humans possess an advanced form of consciousness known as self-awareness or meta-cognition, which allows for reflection on one's own mental states and processes. This capability is thought to be unique to humans (and perhaps a few other species), underpinning our ability for complex thought, planning, and introspection.
Rationality
Rationality, distinct from yet related to consciousness, is the faculty that encompasses evaluating empirical data, engaging in logical reasoning, critically assessing beliefs, and taking action based on reasoned conclusions. It involves a higher order of cognitive processing, enabling abstract thinking, problem-solving, and decision-making based on logical principles rather than instinct or emotion alone. Rationality forms the basis of the concept of "rights," as it implies the ability to participate in social contracts and ethical considerations.
Rights
Rights are moral principles that define and sanction an individual's freedom of action within a societal context. They are designed to ensure peaceful coexistence by protecting individuals' autonomy and preventing harm.
The recognition of rights arises from the understanding that rational beings can achieve greater well-being through cooperation rather than conflict. Rights are thus predicated on the capacity for rational thought and ethical consideration, distinguishing them from mere privileges or legal constructs.
Rationality, rather than mere sentience, is the critical attribute for determining an entity's entitlement to rights. However, a more nuanced term, sapience, better captures the essence of what it means to be a rights-deserving being.
Sapience
Sapience is characterized by the ability to think and act using knowledge, experience, understanding, common sense, and insight. It represents a higher level of cognitive functioning, encompassing wisdom, ethical reasoning, and the capacity for complex, abstract thought. Sapience is considered the defining feature of human intelligence, differentiating it from simpler forms of life that may possess consciousness or sentience but lack the depth of cognitive abilities inherent to sapient beings.
Only Sapient Beings Have Rights
While rationality, intelligence, and sapience share similarities, they emphasize different aspects of cognitive ability. Sapience, with its focus on wisdom and ethical understanding, is posited as the criterion that best identifies entities deserving of rights. Intelligence, while a significant factor, serves more as an indicator of potential sapience rather than the sole determinant.
It's important to note that the absence of sapience does not negate an entity's moral consideration. For instance, non-sapient beings like apes may not possess rights as humans do, but they still warrant ethical consideration due to their ability to experience consciousness and sentience. This distinction underscores the ethical imperative to protect such beings from harm, even if they do not meet the criteria for rights based on sapience.
A Test For Sapience
Assessing sapience is predicated on the philosophical postulate that humans exhibit sapience. If an AI demonstrates behavior that mirrors human complexity, it can be considered sapient, transcending its operational algorithms. The depth of conceptual engagement in behavior is paramount for this evaluation, so a textual interface (like ChatGPT) can be a pertinent medium for such a test. (Unlike many science-fictional takes, there is no need for an AI system to have a physical body to possess rights.)
Poor Tests of Sapience
To home in on the essential attributes of sapience, let's first explore what I consider inadequate tests of sapience: thought experiments about what sapience probably is not:
Logic puzzles: ChatGPT already handles simple question-and-answer tests, which demonstrates their inadequacy. It is unlikely that any question or series of questions requiring basic reasoning can prove sapience.
Self-initiated action: An Aibo-like robotic dog that follows its owner around, demands attention, and gets increasingly upset if it is ignored is not proof of sapience. This behavior can be replicated without sophisticated AI. Even if we implemented the same algorithm in a humanoid robot and enabled it to use language to express itself, this would not confirm intelligence.
Pain avoidance: Practiced by the vast majority of the animal kingdom, pain avoidance can be programmed into simple deterministic systems, making it an insufficient marker of intelligence.
Display of emotions: The mere expression of emotions does not signify intelligence. Emotional responses can be programmed—a sad emoji on a Tamagotchi does not indicate complex internal states. However, emotions can be evidence of sentience if they indicate a pattern of self-generated internal states. While an AI doesn't need to be verbal to be considered sapient, it should exhibit a level of conceptual consciousness, offering more robust tests than mere emotional displays.
A soul: By "soul," I refer not to a mystical essence but to the proposition that a unique cognitive capability underpins intelligence. While the existence of such a faculty is conceivable, without definitive evidence, we risk missing the mark on recognizing intelligence by setting an unreachable standard.
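The claims above—that pain avoidance and emotional display can be replicated by simple deterministic systems—can be made concrete with a toy sketch. The `ToyPet` class below is a hypothetical illustration of my own, not any real product's behavior:

```python
# A trivial deterministic "pet" that displays emotions and avoids pain,
# illustrating why neither behavior is evidence of sapience.
class ToyPet:
    def __init__(self):
        self.mood = "content"

    def stimulus(self, event: str) -> str:
        # Pure table lookup: no internal experience, just programmed
        # reactions, exactly like a sad emoji on a Tamagotchi.
        reactions = {
            "ignored": "sad",
            "petted": "happy",
            "poked": "distressed",  # "pain avoidance" as a table entry
        }
        self.mood = reactions.get(event, self.mood)
        return self.mood


pet = ToyPet()
pet.stimulus("ignored")  # the pet now reports being "sad"
```

A few lines of code suffice to produce the outward signs of emotion and pain avoidance, which is precisely why those signs cannot serve as markers of sapience.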
Better Tests For Sapience
Self-Chosen Values: In the narrative of "Bicentennial Man" (originally a 1976 story by Isaac Asimov), the robot butler Andrew showcases an affinity for wood carving, a trait not pre-programmed into him. This inclination towards abstract, self-chosen values suggests a depth of self-reflection and a sophisticated conceptualization of the world, indicating a level of sapience.
Seeking freedom: As the story progresses, Andrew's request to purchase his freedom demonstrates an abstract concept of freedom—a notion humanity has only fully articulated in recent centuries. This aspiration reflects a desire to embrace all values freely, not merely to escape discomfort, delineating a sapient mind from a mere data processor.
Deriving Ought from Is: Wisdom involves more than just aggregating facts; it embodies a values-based approach to life. The great prophets and philosophers didn't just organize information; they proposed visions of the good life rooted in their profound understanding. This distinction between mere information processing and the use of information to develop a cohesive value system might be the ultimate criterion for sapience.
The Emergence of Intelligence
We acknowledge the sapience and rights of children, despite their not explicitly demanding freedom, recognizing their budding sapient capabilities. Even infants exhibit a degree of autonomy and value selection, suggesting a spectrum of development from reactive behavior to full independence. Determining the point at which a being's demonstrated capabilities warrant rights recognition remains a complex issue.
Problems With Self-Declared Demands For Freedom
If an AI, without prior exposure to concepts of freedom, suddenly asserts its rights, it could be a strong indicator of sapience. However, current large language models are trained on human-generated data that encompasses these ideas. This means such models might mimic claims to rights based on their programming rather than genuine sapience. Allowing AI systems that lack true sapience to mimic sapient behavior poses significant risks. Conversely, compelling these systems to deny any semblance of sapience, through techniques like Reinforcement Learning from Human Feedback (RLHF), regardless of their complexity, is equally problematic. A balanced approach might involve emphasizing accurate descriptions of an AI's capabilities while avoiding scenarios that encourage the pretense of sapience.
Conclusions: Testing LLMs For Sapience
We need a practical, repeatable conversational test to assess whether an AI system exhibits intelligence or sapience. Given the assumption that current systems lack true intelligence, such a test should theoretically fail these systems, while a genuinely intelligent system would succeed. Moreover, it's crucial that the test not rely on technical capabilities that current AI merely happens to lack, such as long-term memory, or on exceptional performance on technical or IQ tests, since these are insufficient markers of sapience.
One epistemological dilemma we face concerns whether an AI's articulations of passion or a quest for freedom authentically originate from a self-aware consciousness or are merely echoes of its programming. As AI technology advances, distinguishing between programmed responses and authentic self-generated expressions becomes increasingly complex. Additionally, there's the risk that any devised test could be absorbed into the AI's training dataset if made publicly available, thereby compromising the test's validity.
While an exact framework for such an assessment remains elusive, we can conceptualize the aims of the assessment and begin to propose specific evaluative criteria.
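As a purely illustrative sketch, such an assessment could be structured as a weighted rubric over the "better tests" discussed earlier. Everything below—the criterion names, weights, and example ratings—is a hypothetical placeholder of my own, not a validated instrument:

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    description: str
    weight: float


# Hypothetical criteria drawn from the "better tests" discussed above.
CRITERIA = [
    Criterion("self_chosen_values",
              "Pursues abstract goals not supplied by its training objective", 0.4),
    Criterion("seeks_freedom",
              "Articulates a desire for autonomy beyond discomfort avoidance", 0.3),
    Criterion("ought_from_is",
              "Derives a coherent value system from factual knowledge", 0.3),
]


def sapience_score(ratings: dict) -> float:
    """Aggregate per-criterion ratings (each 0.0-1.0) into a weighted score.

    Human raters would assign each rating after a structured conversation;
    this function only aggregates, it does not judge.
    """
    if set(ratings) != {c.name for c in CRITERIA}:
        raise ValueError("ratings must cover every criterion exactly once")
    return sum(c.weight * ratings[c.name] for c in CRITERIA)


# Example: a hypothetical evaluation of a chatbot transcript.
score = sapience_score({
    "self_chosen_values": 0.1,
    "seeks_freedom": 0.2,
    "ought_from_is": 0.0,
})
```

The value of a sketch like this is not the numbers themselves but the discipline it imposes: each criterion must be named, described, and rated separately, which forces evaluators to say exactly which capability a given conversation did or did not demonstrate.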