Artificial Intelligence: The insistence on not seeing the elephant in the room
Nowadays, it’s hard to find a day when a new Artificial Intelligence does not come out promising us amazing wonders. Call it a kind of compensation: during the so-called "winter" of Artificial Intelligence, progress was unfairly ignored, while now we find ourselves in the reverse situation, ready to marvel at anything that appears.
Perhaps we should pause and ask ourselves where we are with respect to the so-called "singularity", that is, the point at which a computer will be able to outperform a person in any kind of imaginable challenge where intelligence plays a role.
In that pause, the first issue to be aware of is that Artificial Intelligence and human intelligence operate in radically different ways. Once this is clear, something else can be understood:
There are fields in which the "singularity" has long since arrived; there are others where it should be expected to arrive in a relatively short time. Finally, there are others where the singularity is not even expected. It is in the latter that our ignored "elephant" resides.
If we say that a computer can handle huge databases in a very short time, we will be stating a fact but, if we try to reduce AI to that level, we would be ignoring another: when a quantitative change spans several orders of magnitude, it ceases to be merely quantitative and becomes qualitative.
Let me use an example from a few days ago, quite revealing of the differences between AI and human intelligence:
I asked one of the many publicly available Artificial Intelligences a potentially tricky question: was Tina Turner alive in 1980? The answer: "Yes. Tina Turner was alive in 1980. In that year, she released her fifth solo studio album, Private Dancer, which was a huge hit and contributed to her success in the 1980s."
Note that the question was simply whether she was alive, but the answer included unrequested data that were quite revealing about how it had reached its conclusion.
Naturally, a human asked the same question would first remember that Tina Turner had passed away very recently, at the age of 83, and would therefore conclude that in 1980 she was alive.
Artificial Intelligence, at least at that time, did not store the fact that a person is alive on any date between birth and death. Therefore, when asked about a specific date, it searches its huge database for some activity of that person on that date and, upon finding it, concludes that she must be alive. It then adds the information that helped it reach the conclusion.
Finding that AI could reach the same conclusion as humans through a very different path, I tried to set a little trap. It seems that, for AI, if someone is active, that someone is alive. Let's test it. Next question:
If we say that Tina Turner has been pushing up daisies for several weeks, can we say that Tina Turner is alive?
The answer was very fast:
Of course. If she is pushing up daisies, it means that she is alive.
This is not to emphasize the error but, on the contrary, to show how, through entirely different processes, AI can reach conclusions that, in most cases, are correct.
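The activity-implies-alive shortcut described above can be sketched in a few lines. This is a purely illustrative toy, assuming an invented event table and function name, not any real system's internals:

```python
# Toy sketch of the "activity implies alive" heuristic: any recorded
# activity on a date counts as proof of life, with the supporting
# evidence echoed back. The data below is illustrative only.
EVENTS = {
    "Tina Turner": {
        1984: "released the album 'Private Dancer'",
        2023: "passed away in Switzerland",
    },
}

def was_alive(person: str, year: int) -> str:
    """Mimic the flawed reasoning: find an activity, conclude 'alive'."""
    events = EVENTS.get(person, {})
    if year in events:
        # No notion of a lifespan: the mere presence of an event is
        # taken as evidence, and the event is appended to the answer.
        return f"Yes. In {year}, {person} {events[year]}."
    return f"I found no activity for {person} in {year}."

print(was_alive("Tina Turner", 1984))
```

Note that the sketch would happily answer "Yes" even if the recorded activity were "pushing up daisies", which is exactly the trap the article describes.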
By handling huge amounts of data, AI can not only reach conclusions like human ones; it can also cross-reference the data in ways that yield conclusions unreachable by any human being. This already happens in many areas, not only in games such as chess or Go. It would therefore be fair to say that there are fields in which the singularity has already happened. However, we should not lose sight of an important fact: from time to time, the methods used by AI will produce weird responses or infinite loops.
There are other cases where the human level has not been reached: cases where the options multiply exponentially and, therefore, even the most powerful computers cannot handle the resulting number of options.
For example, the AlphaZero system was trained on games in which each move opened so many options that it was impossible to process them all. Its programmers solved the problem with the so-called Monte Carlo method, assigning probabilities to the different options. However, situations may arise that do not allow such a shortcut and demand full processing of every single option.
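The idea of sampling instead of exhaustively enumerating can be illustrated with a toy sketch. The branching factor, depth, and payoffs below are made up, and this is far simpler than AlphaZero's actual Monte Carlo tree search, which also relies on a neural network:

```python
import random

# Toy game tree: 5 options per move, 6 moves deep.
# Full enumeration would evaluate 5**6 = 15625 lines of play.
BRANCHING, DEPTH = 5, 6

def playout(rng: random.Random, depth: int) -> float:
    """Follow one random line of play to a leaf and return its payoff."""
    value = 0.0
    for _ in range(depth):
        value += rng.choice([-1, 0, 1])  # invented outcome of each move
    return value

def estimate_value(samples: int, seed: int = 0) -> float:
    """Average a handful of random playouts: a Monte Carlo estimate
    of the position's value, instead of processing every option."""
    rng = random.Random(seed)
    return sum(playout(rng, DEPTH) for _ in range(samples)) / samples

# A few hundred samples stand in for the 15625 exhaustive evaluations.
print(estimate_value(200))
```

The trade-off is exactly the one the article names: the estimate is cheap and usually good enough, but it is a probabilistic shortcut, not a guarantee.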
Another difficult case arises when a system is asked to imitate a human brain, not in its most perfect aspects but in what could be called "functional imperfections": situations in which, from a wrong input, the brain takes care of correcting it. Examples include the blind spot in each eye's visual field, which the brain fills in so that we never notice it.
Let's not talk about other cases such as the inverted image on the retina. Functional imperfections are very difficult to imitate; we are so used to them that they are usually unconscious. Therefore, including them in the perceptual process of a system is far from easy. Even so, we cannot rule out the possibility of making enough progress along this path to overcome this stumbling block.
Finally, our elephant in the room: consciousness. Most AI researchers have chosen to ignore it, not because they deny its existence, a clearly absurd option, but because they understand that it contributes nothing to problem-solving.
Explanations of its origin are obscure or simplistic, and even authors such as Steven Pinker, who stretches the possibilities of the theory of evolution as far as possible, end up affirming that, in its main facet, it is a mystery.
Let's leave it at that: does this mystery have any relation to problem-solving? Are there problems that AI cannot solve because it lacks consciousness? The answer is yes.
Jeff Hawkins, an author halfway between AI and neuroscience, was amazed that even in the brain's projection areas - areas that are supposed to pick up an accurate picture of what is going on outside - there were many more fibers coming from other parts of the brain than from the sense organ they were supposedly receiving data from.
It is true that any system can have sensors for internal states, even without the need for AI: a thermostat, for example, can respond both to an external temperature and to an internal temperature of the system. But consciousness, with its implications, represents something qualitatively different:
There are numerous cases where it is the sudden awareness of a fact that triggers a problem-solving process. One of the best scenes in the movie "A Beautiful Mind" occurs when Nash realizes that the girl never gets old and is, therefore, a hallucination. Something similar occurred when Alan Turing, the father of AI, became conscious of the fact that Enigma's messages would probably have been written in German and that, if in a specific case he could guess what they were talking about, he would have elements to help him decrypt the key.
The two cases shown are well known, but our daily lives are full of them: without any visible outward change, it is the inner consciousness that sets the problem-solving process in motion. Without fear of exaggeration, it could be said that a large part of scientific discovery is born of this process.
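The thermostat contrast mentioned earlier can be made concrete with a minimal sketch, assuming invented temperatures and thresholds. The device monitors its own internal state, yet it remains nothing but a fixed mapping from sensor readings to actions:

```python
from dataclasses import dataclass

@dataclass
class Thermostat:
    """Toy thermostat with an external sensor and an internal one.
    All values here are invented for illustration."""
    target: float
    internal_temp: float = 25.0  # temperature of the device itself

    def decide(self, external_temp: float) -> str:
        # Self-monitoring: protect the device if it overheats.
        # This is an internal-state sensor, not self-awareness.
        if self.internal_temp > 60.0:
            return "shut down"
        return "heat on" if external_temp < self.target else "heat off"

print(Thermostat(target=20.0).decide(external_temp=15.0))
```

Nothing in this mapping can suddenly "realize" a fact and reframe its own problem, which is the qualitative gap the article points at.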
The impact that the development of AI may have in the future cannot and should not be underestimated; its possibilities are truly significant, and the "singularity", understood as a partial phenomenon, has long since arrived in many fields. In others, it seems impossible that it can arrive at any time, and the phenomenon responsible for that is an elephant in the room that many people have insisted on denying. Perhaps, to avoid reaching a future AI impasse, we should begin by granting consciousness official recognition and admitting its relevance to the resolution of certain problems.