Sentient AI: Parrot, Parity, or Parody?
[Originally published at https://csa-research.com/Blogs-Events/Blog/Sentient-AI-Parrot-Parity-or-Parody]
In the 2008 thriller Eagle Eye, a rogue “super AI” (called ARIIA) developed by a government agency manipulates humans, the digital environment, and physical objects – often in completely preposterous fashion. Its mission is to maneuver the protagonist to a place where his biometric properties can substitute for those of his deceased twin brother. Working through this agent, ARIIA would be released from hard-coded constraints that keep it from taking over the government. The film represents the common misperception that AI – in this case in a pre-neural network incarnation – is far more capable than it actually is. Some AI experts have long predicted that it is only a matter of time before an artificial intelligence becomes self-aware and exceeds the capabilities of humans. In 1993 sci-fi author Vernor Vinge labeled that eventuality the “singularity,” a term that the futurist Ray Kurzweil subsequently popularized.
Like perfect machine translation, the coming of the singularity is always imminent, yet never achieved. Experts debate whether it will ever arrive, but many stipulate that self-awareness and consciousness would emerge from the artificial superintelligence (ASI) that drives the singularity. Assuming it happens, they also ask whether such an ASI would hold a human-like moral code toward people, be indifferent to them, or, like ARIIA in Eagle Eye, be hostile to them.
Last week, the Washington Post published an article about Blake Lemoine’s claim that his employer Google’s LaMDA language model/chatbot system had achieved sentience and had a “soul.” Lemoine, an engineer in the company’s responsible AI group, based his assertion on a dialogue in which LaMDA expressed human-seeming sentiments and concepts. Google placed Lemoine on leave, sparking renewed discussion about what machine sentience is and what it means. It also stoked debate about whether the dialogue demonstrated true awareness on the part of the chatbot or merely an illusion of it. Most researchers dismissed Lemoine’s claims, which were at least partially informed by his own religious beliefs.
Machine Translation Raises Similar Concerns
These discussions are reminiscent of – and closely related to – debates about whether machine translation has achieved “human parity,” as some researchers have claimed. Others may argue that it is closer to “human parody,” but both LaMDA and the claims about MT raise fundamental questions about what it means to be human, to be intelligent, and to have a “soul” (if people could ever agree on just what that means). At one extreme, scholars such as Stephen Hawking have maintained that these questions are largely irrelevant if an evaluator cannot tell the difference between a human and a computer in a conversation – the assertion behind the famous Turing Test. At the other, philosopher John Searle’s famous “Chinese room” thought experiment contended that approaches such as the Turing Test cannot demonstrate actual intelligence or sentience, no matter how convincing the output may be.
The German natural language processing (NLP) researcher Aljoscha Burchardt falls into the latter camp, arguing that current-generation AI is fundamentally a “parrot”: it can repeat things it has seen in some fashion in its training data, but without understanding. As he put it, “[AI] is a parrot. A sophisticated parrot, but still a parrot.” Both MT and Lemoine’s transcript illustrate the difficulty of distinguishing between a parrot, human parity, and human parody. Does LaMDA actually understand the questions and respond to them from a position of true understanding and introspection? Or does it generate convincing-sounding results because the statistical patterns in its massive training data allow it to emulate intelligence? If the latter, we should note that even if it is a parrot, it is certainly an impressive one.
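The “statistical parrot” idea can be made concrete with a deliberately tiny sketch: a bigram model that can only recombine word sequences it has seen in its training text. This is a toy illustration of learning statistical patterns, not a description of how LaMDA or any Google system actually works – modern large language models are vastly more sophisticated – but the core idea of sampling a likely continuation is the same.

```python
import random

def train_bigrams(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    model = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def parrot(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling a continuation seen in training."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:  # the parrot has never seen this word followed by anything
            break
        word = rng.choice(choices)
        output.append(word)
    return " ".join(output)

corpus = ("the parrot repeats what the parrot has heard "
          "and the parrot sounds convincing")
model = train_bigrams(corpus)
print(parrot(model, "the"))
```

Every output is locally fluent because each adjacent word pair occurred in the training corpus, yet nothing in the program understands anything – which is precisely the distinction the parity/parrot debate turns on, scaled down by many orders of magnitude.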
Why These Questions Matter for the Language Industry
What are the implications (if any) for the language industry, and what light can this industry bring to these questions? We see the following:
In summary, AI and MT are not as advanced as the popular press and some researchers would suggest. They may deliver the illusion of awareness because they are convincing parrots, but they have yet to move from parrot to parity. Although machines may reach sentience, it seems that today’s chatbots are more of a “parler” trick than true intelligence. As a result, human translators do not (yet?) have to fear AI any more than we have to fear the evil AI of Eagle Eye or the Terminator film franchise.
Comment (2 years ago) from a market researcher and business consultant at CSA Research for leading global firms (“Without data, you’re just another person with an opinion”):
Great points, @Arle. Your post underscores the challenges of fast-evolving technologies like AI and machine translation. But no matter how quickly and completely they evolve, they raise a classic infotech paradox: each advance in quality, functionality, and performance will be met with a new, expanded, or otherwise revised requirement for more quality, functionality, and performance – if not on the same axes of development, then assuredly on others. We have observed this most intriguing paradox in every technology we’ve studied.