What is "Intelligence"? Is ChatGPT smarter than my parrot?
Andrei Cernasov, Ph.D.
Author, Innovation Consultant, Creativity Expert, Trainer, Speaker
I asked ChatGPT if it is smarter than a parrot, and it, predictably, equivocated! Its "intelligence," it said, "is limited to processing and generating text based on patterns in the data it has been trained on." But it did call it "intelligence"! On the surface, Coco, my parrot, is also intelligent. After all, I trained him much as others trained ChatGPT. I taught him to say, "Let's go to sleep," when he wants his cage covered at night. In time, the phrase became his equivalent of "I am Groot" from the "Guardians of the Galaxy" trilogy: he repeats it whenever he wants attention, which proves he doesn't understand the meaning of my words but knows what he wants.
I decided to go deeper. I was in the middle of writing a book (who isn't these days?), and I was looking for a working title. Since the book explores how the human brain develops new concepts, I asked both Coco and ChatGPT if "Mindforming" is a compelling title. ChatGPT replied, "The appeal of 'Mindforming' as a book title is subjective and depends on the book's content." So, should the audience read the book first and then pick it up from the bookstore's shelf based on its title? Interesting advice. I suppose authors could hint at a book's content with long and clever subtitles! Coco's answer was more concise but equally illuminating: "Let's go to sleep."
Undaunted, I went further. "What do current innovation books fail to cover?" I asked. Within seconds, I was blessed with a promising list: Cross-Disciplinary Innovation, Societal and Ethical Implications, Global and Cultural Perspectives, Innovation in Non-Profit and Government Sectors, Innovation in Small and Medium-sized Enterprises (SMEs), Human-Centered Innovation, Interplay between Innovation and Regulation, and Collaborative and Open Innovation.
However, a quick review of innovation books on Amazon showed that many other books have already explored these subjects. One category, Collaborative and Open Innovation, even ranks as a "best seller." Here, the ChatGPT-generated list was just wrong. For this question, Coco offered the same prompt advice: "Let's go to sleep."
With his estimated one-and-a-half billion neurons, my parrot sensed the absurdity of my quest for good advice from ChatGPT. I could swear his poker face betrayed a bit of gloating. After all, I believed my one hundred billion neurons could detect false "facts." GPT-3 has been rated by some as twice as intelligent as humans, and some even claim that its Google counterpart, LaMDA, is sentient. GPT-3 uses 175 billion parameters (weights) organized in 96 layers of "neurons," trained on roughly half a terabyte of text (equivalent to 164,129 "Lord of the Rings" book sets). How can it be wrong?
And yet, with all that firepower, it could not figure out that "Mindforming" is a blend, or portmanteau, of "mind" and "forming," a word designed to convey that brain plasticity can be manipulated, for better or worse.
Time to dispel the myth. ChatGPT is just a spellchecker on steroids! Where a spell checker compares a candidate word against the 171,146 words in the English dictionary, ChatGPT compares a candidate word sequence (the prompt) against similar sequences found in the millions of books and online texts it was trained on. It then extends the search to cover patterns in the word sequences that follow.
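The "spellchecker on steroids" idea can be sketched as a toy next-word predictor. This is a drastically simplified, hypothetical illustration, not how ChatGPT actually works (real models use transformer networks with billions of weights): it merely counts which word follows each observed word in a training text and predicts the most frequent continuation.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the continuation most frequently seen after `word`, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None  # never seen this word: no pattern to match
    return followers.most_common(1)[0][0]

# A tiny made-up corpus, standing in for "millions of books and online texts"
corpus = "let's go to sleep . let's go to the park . go to sleep now"
model = train_bigrams(corpus)
print(predict_next(model, "to"))  # "sleep" (seen twice, vs. "the" once)
```

Like Coco, this predictor tracks only which sequences occur and how often; it attaches no meaning to any of the words.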
ChatGPT may appear to inherit some wisdom from the creators of its training materials. That is why it sometimes gives the illusion of thoughtful responses, but it falls short on more profound queries or prompts about recent developments. However, its natural-language interface can be a very helpful entry point to semantic search engines, and its "generative" feature can supply basic knowledge on almost any subject. And overcome writer's block!
So, is ChatGPT smarter than Coco? Regarding the meaning of words, ChatGPT only tracks the frequency with which we, humans, use specific sequences. From his window perch, Coco uses different sounds to inform my German Shepherd, Max, who lives upstairs, which one of his furry buddies is parading in front of our house. He does "understand" the meaning of the sounds he makes. He seems to enjoy the sights and sounds of Max barreling down the stairs and loudly greeting his friends.
Coco's mind, the "software" running on his neuronic brain, can simulate the outcomes of his actions and choose the ones that will obtain the desired effect: being covered for a good night's sleep or summoning Max to action. Coco's mind holds a model of Max, me, my house, and the other dogs in the neighborhood. Coco will update this model as my neighborhood's menagerie changes. And he can predict how Max and I will respond to his prompts. This is what intelligence is all about: modeling the world, simulating outcomes, and choosing the actions that deliver what you want.
ChatGPT has no intrinsic "intelligence" because it can do none of these things. For now, it can only model language, just one facet of the wide range of human behaviors, and language itself is only one of the ways people communicate. It has no model of me and, therefore, cannot simulate my responses. Its book-related advice was flawed because it interpreted its training materials as strings of characters.
Had I followed its advice, I could have started writing yet another "Collaborative Innovation" book. A bit of due diligence guided me to the neuroscience angle instead. In conclusion, I fear Coco more than ChatGPT's bad advice, and with good reason. Coco took a bite out of my finger some time ago, before I understood who the real boss was. He trained me, and I was intelligent enough to learn.
The same due diligence requirement goes for all the other branches of the so-called Weak AI, which, besides language processing, includes image recognition, expert systems, planning, and robotics. Their abilities all come from what we teach them to do. They are all as dangerous or valuable as we train them to be. And that is why they can be and should be regulated. We don't each get to choose which side of the road we drive on. If we did, the world would stop.
Yet many seem terrified that AI is taking over. Documentary filmmaker James Barrat wrote a whole book on the subject, ominously titled "Our Final Invention: Artificial Intelligence and the End of the Human Era." It echoes the early days of the Industrial Revolution, when many decried the end of jobs for burly men, and even the end of child labor.
Yes, some students will fool their teachers with ChatGPT-drafted essays, and some bosses will sound smarter in writing than in person. Some jobs will change, and a small number of jobs will disappear or be replaced by new ones, like green energy jobs are replacing coal mining jobs. But that is a small price to pay for liberating everyone from "googling" single words and the tyranny of English grammar.
More concerning should be that, according to one survey, 23 million people, allegedly gifted with 17 petaflops of brain power each, believe chocolate milk comes from brown cows. All without ChatGPT's help!
What about Terminator-type Strong AI or Artificial General Intelligence (AGI)? That is a long way off, but it will be the subject of a future post.
Our next post will be on a topic related to my book "Mindforming: The Good, The Bad, and The Ugly!"