Reality check: What is AI, approaching 2021?
Mats Lewan
International Keynote Speaker, Futurist, Senior Advisor, Author, and Journalist
When I give lectures on the future and digitalisation, and on how to understand the change that is sweeping over us, I am often asked what AI really is. Can machines be intelligent? How intelligent is AI? How far can AI reach?
Here’s my view, as we are approaching 2021.
Let’s first have a look at why there has been so much attention on AI in the last few years. Essentially it is due to unparalleled progress within a short time frame, progress that stems not so much from the development of the methods themselves as from increased computing capacity and access to huge amounts of data.
One of the main buzzwords has been deep learning, a method inspired by the human brain that makes machines learn to recognize patterns by training on known data. The method in itself isn’t new, but it has become much more successful thanks to large computing power and big data.
Recognizing patterns might not seem that important, but it is an essential part of what the human brain does all the time. And deep learning applications are so good at this that they can learn to copy human behavior at an astonishing level, and sometimes even outperform humans at certain tasks.
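To make the idea of training on known data concrete, here is a minimal sketch in Python with NumPy. Everything in it, from the tiny two-layer network to the classic XOR toy pattern and the learning rate, is an illustrative choice of mine rather than a description of any production system. The network learns the pattern by repeatedly adjusting its weights to reduce its error on the known examples:

```python
import numpy as np

# Toy "known data": the XOR pattern, which no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights (8 units, my choice)
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate, an illustrative choice
for _ in range(10000):
    # Forward pass: the network's current guess for each known example.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the error on the known data.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```

Real deep learning systems follow the same loop of guess, measure error, adjust, but with millions or billions of weights and correspondingly enormous training sets, which is why the recent growth in computing power and data has mattered so much.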
Typical applications range from image and sound recognition and interpretation, translation, game playing, natural language processing, and complex behavior recognition, to writing stories and driving cars.
Within any of these fields the possibilities are huge, and often our imagination is the limit for how AI can be used. Chest X-ray interpretation and diagnosing diseases from your breath are two promising examples.
One of the most amazing results presented in 2020 is the text-generation system GPT-3, developed by researchers at OpenAI. The system is trained on a corpus of hundreds of billions of words of text written by humans, and it has only one function: to guess the next word. But GPT-3 does this so well that it can compose impressively well-written long texts in any style and on any subject.
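The principle is simple enough to sketch. The toy example below, in plain Python, is my own drastic simplification: a mere word-pair counter, nothing like GPT-3’s actual neural network. But it shows how a system whose only function is to guess the next word turns into a text generator when that function is applied over and over:

```python
import random
from collections import defaultdict, Counter

# A drastically simplified stand-in for a human-written corpus.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word tends to follow which (GPT-3 instead uses a huge
# neural network conditioned on long stretches of preceding text).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def guess_next(word):
    """The system's single function: guess the next word."""
    candidates = following[word]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generation is nothing more than guessing the next word, over and over.
random.seed(1)
word, text = "the", ["the"]
for _ in range(12):
    word = guess_next(word)
    text.append(word)
print(" ".join(text))
```

GPT-3 does essentially this, except that each guess is informed by a vast neural network that takes long stretches of preceding text into account, which is what makes its output coherent across whole documents.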
Apart from prompting interesting discussions on the opportunities and risks of a system that writes texts much more efficiently than humans, and at a much lower cost, GPT-3 also offers an interesting view of what AI still isn’t.
Although the system writes astonishingly well, it reveals its lack of humanlike intelligence through funny errors, such as placing a “puddle of red gravy” in the front yard of Hemingway’s farm, Finca Vigia, outside Havana. That particular error was made by the previous version, GPT-2, but it is a good example of something I often try to explain:
AI of today is incredibly good at copying human behavior, so good that it can perform lots of human tasks very well, and much faster, but when you knock at the door of the AI, there’s no-one at home.
This has been expressed in many other ways: AI has no common sense, AI doesn’t understand cause and effect, or simply, AI doesn’t understand things.
This has a series of important implications:
1. AI-based systems can make dangerous and unexpected errors. Since there’s no understanding or common sense, there’s no function for correcting a calculated result or an action that falls outside of what humans would consider normal or sensible.
2. Lacking understanding and common sense, AI-based systems need to be trained on huge amounts of data to cover every kind of unexpected situation, since we cannot know how they will handle the unexpected.
3. Today’s AI is still far from the human capacity to generalize and to assess “the whole” through reasoning and experience.
The combination of these three implications is in a way why bias in AI-based systems has become an important issue. Since large amounts of training data are needed, bias can be built into the data. And since there’s no understanding or common sense in the system, nothing in it can observe the consequences of that bias.
The combination is probably also why it has turned out to be difficult to make autonomous cars ready for market, even though they are better drivers than humans in almost all situations. It is the last few percent that makes the important difference.
You can also put it like this: human drivers depend only on the human eye and the human ear as sensors, which are very limited compared to the immensely advanced sensors used on autonomous cars. The reason is that intelligence compensates for limited data: the less intelligence you have, the more data you need.
This is also why most people don’t expect AI-based systems to take over jobs, but rather to assist humans by taking over repetitive and tedious tasks in many jobs and professions, letting humans focus on the more advanced tasks that humans are still better at.
Or in other words: In the best of possible worlds, machines will let humans become more human.
More precisely, what today’s AI lacks compared to humans is the capability to generalize, and humanlike consciousness.
Our consciousness is what makes it possible for us to imagine things, plan, solve new problems, and interact with other humans in a truly social way.
Our emotions are not necessarily impossible to copy. On the contrary, they could probably be modeled as complex patterns that are quicker and more efficient than reasoning for making fast decisions, perfectly usable by machines as well.
But our emotions in combination with our consciousness—the experience of feeling emotions—is what still makes us unique compared to machines.
I use the word still. Because even though we don’t have a good model for how consciousness could be built—we don’t even know what consciousness is or how to prove that anyone has it—this doesn’t mean that we will not have such a model one day.
You could even find it a bit presumptuous to think that humans are the end of evolution and that only humans can become conscious. And it is hard to see why AI couldn’t become conscious one day.
This is also where perspectives start becoming vertiginous.
Consider the capability of today’s AI to assess and process huge amounts of data and to learn patterns, outperforming humans by orders of magnitude. If you add a humanlike capability to generalize and a humanlike consciousness to this, you would most probably turn AI into something we might not even be able to grasp. If you are looking for an explanation of the term superintelligence, this would perhaps come close.
This, however, is a future perspective, and not yet close.
Returning to what AI is today, as we approach 2021: what you should remember is that AI-based systems are already amazingly good at copying human behavior, and they will become even better. But for now, when you knock at the door, there’s no one home.
This is what you should keep in mind.
(This article was also published on my blog, The Biggest Shift Ever.)
I'd be glad to share more perspectives with your organisation or with your audience. Please don't hesitate to contact me.