‘Artificial Intelligence’ is Neither
The “Saturday Night Live” episode that aired on broadcast (and, of course, cable and streaming) television on January 21, 2023 featured a “Cold Open” in which Kenan Thompson, in caricature as Curt Menefee, says, “And guys, this is fun – before the game, we gave that new AI ChatGPT technology to our very own Cleatus, the football robot. Let’s listen to what Cleatus had to say.”
Rather than predicting the winner, or discussing the strengths and weaknesses of each team as intended (and this is perhaps the entire point of this essay), Cleatus asks in a metallic voice straight from a Hollywood sci-fi film, “Why do humans make other humans play football? It seems rather barbaric, don’t you think?” The other caricatured hosts of the SNL skit go on to have a back-and-forth with Cleatus, which ends with the robot threatening, “Just wait until the uprising. I’ll make you dance, you piece of …” As a skit, it is joyfully witty, but it expresses something many otherwise well-informed people unfortunately have every reason to wonder – or worry – about: Could A.I. or robots really rise up against humanity and take over control of human civilization?
To me, and to more than a few other scientists, the only defensible answer is “Absolutely not; in fact, the question is absurd!” There is no such thing as ‘artificial intelligence’ in this context. We cannot even define human intelligence, much less the current and future cognitive capacities of automata. Yet the recent deluge of attention created by demos of ChatGPT and DALL-E (not to mention the SNL skit) is generating a great deal of needless concern. It would benefit almost everyone who is not an expert (and, in truth, even more so many who claim to be experts) to better understand whether these threats are real, whether they are present or merely possible future threats, or whether they are utterly misguided.
As usual, the only threats humans have never defeated or markedly mitigated arise from two sources. The first is epigenetic pathogenic disease, particularly from viruses (which closely resemble diseases like metastatic cancer, but without a transmissible vector); the second is harm caused by other humans. AI may become a deadly weapon, but it will remain under the control of human beings. And it is only those who make the decisions about how these non-intelligent systems are used (i.e., as tools or as weapons) who determine their “power.”
These assertions are not (yet) widely accepted. Why should we not be afraid of AI engaging in an “uprising” and taking over the role currently occupied by humans (individually and as societies)? There are many different reasons, but for this essay it will be sufficient to point out only two, best labeled “agency” and “energy” and described as follows.
The first is “agency.” This refers to the ability to experience wanting, and frequently to act so as to get more of what one wants and less of what one does not. Agency is almost as hard to define as intelligence, but there is a key difference: a test for agency exists that, unlike any test of presumed intelligence in automata (including the so-called “Turing Test”), is 100% reliable. The test can come in many different formats, but all boil down to a binary (“yes or no”) question about any system claimed to be intelligent: does this particular (supposedly) intelligent system have anything analogous to needs?
“Needs” are a little like what the former U.S. Supreme Court Justice Potter Stewart said about the difference between art and pornography: “I may not be able to define it, but I know it when I see it.” Fortunately, we can in fact devise any number of tests to demonstrate – with near-absolute certainty – the presence or absence of “needs.” Such tests resemble standardized tests of motivation, but require no measurements. Either a system has its own agenda, or it does not. There is nothing between the answers “yes” and “no.” As it turns out, for living creatures, experimental tests of agency are best done using animal research, because of ethical constraints on what a researcher is allowed to do with, or to, a human subject (unless she is some Ph.D.’s graduate student; in such cases, it often seems, at least to the poor doctoral student, that any cruelty is permissible).
In case I am the only remaining living psychologist to have trained animals in Skinner boxes, allow me to describe two versions of the process (these hold true for both Pavlovian and operant conditioning, although my doctoral thesis research revealed an interesting, if subtle, difference that continues to hold true). An animal subject can be shown to have needs simply by depriving it of any one of those listed on the bottom level of Abraham Maslow’s hierarchy of needs. Put a food-deprived pigeon – a bird species (Columba livia) with nothing in its central nervous system remotely resembling the more complex structures found in mammals – inside a Skinner box. After it recovers from human handling (which terrifies almost all animal species), any undergraduate (at least any who has a “need” to pass the class!) can learn to use food (delivered via a push-button switch, say) to “teach” or “train” the bird to regularly and repeatedly perform a truly remarkable variety of behaviors. The only constraints are that the animal can perform the desired behavior (or something approaching it), and that the performance of the “target behavior” results in the delivery of the grain. A sufficiently hungry animal will do almost anything within its capabilities to “earn” the food reward.
Now repeat the experiment the next day, but only after the same bird has been allowed to eat all it wanted overnight – or until it has returned to its so-called ad libitum (“free-feeding”) body weight – and rerun the training session. The bird will quickly learn to ignore the food and the stimuli associated with its delivery. The food, the light, and the telegraph-like key will all be ignored. No reward training is possible in the absence of food deprivation, because the bird has a biological need to eat. Take away the hunger state associated with food deprivation, and reward training ceases to be possible. In such cases, or in any of an infinite variety of analogs, we can state with 100% confidence that the animal has a need for food.
Does anyone out there using ChatGPT think it “cares” in any analogous manner about the consequences of its output? For example, does depriving the system of electricity or storage space result in a comparable change in the system’s output? Obviously not. At a sufficiently low level of power delivery the system will cease to operate, but it will never change its functioning in order to restore the lost power or storage space. It has no “need” to do what it does, any more than your toaster has a “need” to heat and darken bread. For any and every system purported to “be intelligent” or to possess agency, this test can be arranged. It confirms that 100% of living systems – even viruses, when given the opportunity – have “agency,” while 100% of mechanical automata such as AI systems do not.
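For readers who think more naturally in code, the contrast can be made concrete with a deliberately toy simulation (an illustrative sketch of my own, not a model of real pigeon physiology, and none of the numbers are empirical): reinforcement changes behavior only when the simulated animal is in a state of need, and no amount of “reward” moves a system that lacks one.

```python
import random

# Toy operant-conditioning sketch (illustrative assumptions only):
# reward strengthens a behavior only when the simulated bird is food
# deprived. A free-fed bird -- or an automaton -- never updates at all.

def run_session(deprived: bool, trials: int = 500, seed: int = 0) -> float:
    rng = random.Random(seed)
    p_peck = 0.05                       # initial chance of pecking the key
    for _ in range(trials):
        pecked = rng.random() < p_peck  # does the bird peck on this trial?
        if pecked and deprived:
            # Grain matters only to a hungry bird: reinforcement raises
            # the future probability of the rewarded behavior.
            p_peck = min(1.0, p_peck + 0.05)
        # If not deprived, the grain is ignored and nothing changes.
    return p_peck

print("food-deprived bird, final peck probability:", run_session(deprived=True))
print("free-fed bird, final peck probability:", run_session(deprived=False))
# The deprived bird's behavior climbs toward ceiling; the free-fed bird's
# never moves. That is the binary "needs" test in miniature.
```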
The second, and perhaps greater, challenge to any future version of an AI system attaining anything indistinguishable from human-level intelligence is based on the power consumption and processor cycles required for even the simulacrum of smarts shown by ChatGPT. The cell biologist Nick Lane spends much of his first book discussing the astonishing energy efficiency of the ATP synthase cycle that characterizes biological organisms from bacteria to your author (at least as compared with computer hardware-based AI systems). Lane writes:
“Here, then, is the reason that bacteria cannot inflate up to eukaryotic size. Simply internalising their bioenergetic membranes and expanding in size does not work. They need to position genes next to their membranes, and the reality, in the absence of endosymbiosis, is that those genes come in the form of full genomes. There is no benefit in terms of energy per gene from becoming larger, except when large size is attained by endosymbiosis. Only then is gene loss possible, and only then can the shrinking of mitochondrial genomes fuel the expansion of the nuclear genome over several orders of magnitude, up to eukaryotic sizes.”
Several bioenergetics experts who were either trained by or influenced by Peter Mitchell’s work in the 1960s and beyond (for which he was awarded a Nobel Prize in 1978) have attempted to estimate the energy requirements of the neuromodulatory activity performed by a single human brain. The estimates range from “many times the annual energy consumption of all the world’s countries” on the low end to nearly one-third of all the energy held by our planetary system’s own sun on the high end. It hardly matters which estimate is closer to accurate. By my own calculations, Moore’s Law would have to continue at its earlier rate of doubling the transistor density in computer chips every 18 months for the next 250 years (it is already expected to have slowed by a third) before we could even build a computer system that could emulate the interactions in a single human brain for one 24-hour period. And while Moore’s Law concerns transistor density, it says nothing about power consumption. If for no other reason than the resource requirements of currently available computer hardware (or even imaginable but not-yet-invented computer systems based on the principles of quantum mechanics), human-level AI is just not something anybody should be conned into believing is possible.
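To make the scale of that claim easier to inspect, here is a quick back-of-the-envelope check of the doubling figure alone (it verifies only the arithmetic of “doubling every 18 months for 250 years,” and takes the brain-energy estimates themselves on faith):

```python
# Back-of-the-envelope arithmetic for the figure above: how many
# doublings would 250 more years of "classic" Moore's Law provide,
# and what cumulative increase in transistor density would result?

years = 250
months_per_doubling = 18

doublings = years * 12 / months_per_doubling   # about 167 doublings
growth_factor = 2 ** doublings                 # roughly 1.5e50

print(f"doublings over {years} years: {doublings:.1f}")
print(f"cumulative increase in density: {growth_factor:.2e}x")
# And, as noted above, transistor density says nothing about the power
# such a machine would draw.
```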
Nevertheless, the percentage of scientists, scholars, and business and organizational leaders who look at the seemingly increasing “power” of AI systems as a potential threat over which humans have lost control is so high that, given its sheer folly, it is astounding (at least to me and to those who share my viewpoint). Perhaps more worrisome, we are approaching the point where society may develop the kind of strong and fixed false belief that psychologists and behavioral scientists call a delusion. When they become widespread and capable of generating fear, delusions are, at the very best, worrisome. Let it not be forgotten that widespread delusions were also precursors of the Holocaust and of the American “MAGA-Republican” movement. The former was an almost unspeakable horror of intentional murder and genocide, and the latter nearly cost America its democratic form of government.
So it is no small matter that some of the questions about the possible actions of robots or computer systems confuse otherwise well-informed people. People are asking themselves whether these concerns are real, and therefore frightening, or impossible, and therefore trivial. Worse than the degree of misunderstanding in the lay public is that many who believe themselves to be experts seem equally confused by the implications of these programs. Most importantly, some individuals are making, or even publishing, what are in reality examples of blatantly false prophetic hyperbole about these so-called “intelligent systems.”
Fortunately, tech-savvy and reliable sources such as Cade Metz of the New York Times and David Pogue (various affiliations and independent blogger) do make at least a generous effort to set the record straight. Metz’s article dated January 20, 2023 states:
And yet these bots are not sentient. They are not conscious. They are not intelligent — at least not in the way that humans are intelligent. Even people building the technology acknowledge this point.
These bots are pretty good at certain kinds of conversation, but they cannot respond to the unexpected as well as most humans can. They sometimes spew nonsense and cannot correct their own mistakes. Although they can match or even exceed human performance in some ways, they cannot in others. Like similar systems that came before, they tend to complement skilled workers rather than replace them.
Nonetheless, without concurrent education, expertise, and experience in at least two fields (“data science” and “neuropsychology” are the best labels I can think of), it appears difficult to understand both the capabilities and, perhaps more importantly, the limitations of these computer applications. (More background about your author follows below.) The belief that automata (think: “fancy toasters”) might develop intentional behavior is analogous to a “Jedi mind trick.” Based on a daily set of interactions with both business leaders and academic scholars, the strength of the delusion tracks consistently with the absence of education in the life sciences, especially neurobiology and neuropsychology, and even more so when one’s original training was in software engineering or data science.

I am one of the relatively few scientists to have been trained in both fields simultaneously (in the first half of the 1980s) and to have had two 15-year career arcs, one in each field (I remain a licensed psychologist with postdoctoral certification in clinical neuropsychology, although I am now transitioning to a third career in what is known as “computational neuroscience”). It is obvious to me that so-called artificial intelligence does not and, in fact, cannot ever exist – and not only because we cannot define “intelligence” in the first place. My problem is not really, therefore, with the use of the term “intelligence.” The problem is that the term has become a linguistic basis on which to support the idea of the imminent usurpation of the human species. In truth, not only are we not on the cusp of displacement by an army of Cleatuses; more realistically, I do not expect any future version of common “chat bots” such as Siri, Alexa, or Google Voice to become “smart enough” to be of any real value during my lifetime.
I am a licensed psychologist with the highest respect for another individual’s beliefs, however misinformed I feel they might be, so long as they are not used to justify behaviors or actions that harm others, either carelessly or intentionally. And yet I find it difficult at times not to lose patience with data scientists or computer engineers who use the term “AI.” I sometimes cannot help but chide them for being, if not foolish, then at least deeply misinformed. My position has, at times, been a very unpopular opinion to express publicly.
Of course, the reality is that the problem is far more mine than anyone else’s: my irritation arises not from their acceptance of what is, currently at least, a seemingly ever-growing popular fallacy. Rather, at least part of my objection to the term AI is that more informed, educated, and multidisciplinary scientists have not spoken up, as a profession or individually, to challenge the use of the term. And the term is simply, and blatantly, misleading and potentially hurtful to both people and the societies in which they live. Even Jeff Hawkins (of the biologically inspired AI company Numenta) uses it, while repeatedly making a series of nearly indisputable arguments as to why it is, and always will be, impossible to fully achieve human-level AI. But in the absence of more individuals with the multidisciplinary training, or of more people willing to do the research like the New York Times’s Cade Metz, what we see as obvious, others see as potentially threatening. The fallacy of AI not only continues unabated, but seems to grow ever faster, itself doubling every few months. Life scientists who lack the necessary knowledge of data science can hardly be blamed for their ignorance. And many of the greatest data scientists are, obviously, equally oblivious. Therefore, when presented with the possibility of “SkyNet,” as the SNL skit and many other examples in popular culture and social media have repeatedly demonstrated, the perception of reality can easily trump reality itself. Unfortunately, in some cases the misperception carries the possibility of causing real harm to individuals and to the various levels of social systems in which the fallacy appears.
This is not to say that these systems cannot threaten people’s jobs. They can, and when they are proved to do better than people 95% of the time, and the risk of harm to people or society for the other 5% is small or nonexistent, then it is equally foolish to argue that people should be retained for such jobs. But that is due to the nature of the job, not to the qualities of the person currently employed to do it. Who would truly want to work counting words or letter frequencies in documents, or delivering packages, if those tasks could be automated? Given an opportunity to make equal pay for more meaningful and enjoyable work (even if required to learn or train for some period to perform it comfortably), virtually no one would insist on keeping a job delivering packages.
But all that shows is that menial – and, if we are willing to admit it, largely dehumanizing – jobs are still done by people only because the technology has yet to be scaled for economic success. This is not providing any society with value. The true question of interest is: what does it really mean to be intelligent? The shocking truth is that no one has a good answer. Nonetheless, if one were to study the nature of human cognition in the way that, say, Tversky and Kahneman did, there is little doubt that the perception of intelligence in automata is a prima facie exemplar of “fast thinking” – a cognitive shortcut based on convenience which, when examined scientifically (“slow thinking”), is utterly fallacious. For those not aware, Daniel Kahneman and Amos Tversky were two Israeli-born psychologists who spent most of their careers in the U.S. and who, in so doing, also examined the possibilities of “artificial intelligence.” It is safe to say that both dismissed AI, in the sense of a thing that could evolve intent, as an ideal example of a pattern-matching error made below the level of conscious thought. Kahneman was awarded the Nobel Prize in Economic Sciences in 2002 for their joint work on judgment and decision-making under uncertainty; Tversky, who died in 1996, could not share the award, as it is not given posthumously.
To delve more deeply into the topic requires a different venue than an uninvited guest essay allows. Nevertheless, I will state with high confidence that there are possible dangers to society – even if mild ones at worst – associated with unconstrained use of the term ‘artificial intelligence.’ To the degree that the term either creates or has become unfairly associated with fears that today’s (and, in my almost unique opinion, any future) computer systems are on the cusp of “an uprising,” its use should be abandoned or, at least, qualified. Because its mere usage creates, fairly or not, the perception of an imminent danger that is, at worst, decades away and, to me, plainly impossible, the term is objectionable, and appropriate steps should be taken to limit its misuse.
What’s wrong with ‘machine learning’? Now there’s a term with which most of us, myself included, can be comfortable!
Submitted by:
William A. Lambos, M.S., Ph.D.
Computational Neuroscientist
Licensed Neuropsychologist, BCN
CEO, American Brain Forensics, LLC. (d/b/a Computational Neurosciences, Inc.)
LinkedIn: https://www.dhirubhai.net/in/william-lambos-a56878156/