Why the Technological Singularity May Not Happen
Joydip Bhattacharyya
Deep Learning, ML Enthusiast || Associate, Tech Strategy & GenAI Dev @ Invesco || Gold Medallist || GEHC HACK'E'LTH 2023 National Winner
In recent times, COVID-19 has wreaked havoc across the globe. The lockdowns imposed by governments in various countries have thrown us half a century into the past. However, another fear that has gripped many of us this decade is the thought of Artificial Intelligence replacing humans for good. If that were ever to become a reality, humanity would be thrown back into a far more distant past, most probably to a time when humanity itself did not exist.
In March 2016, AlphaGo, a Go-playing program developed by Google’s DeepMind, defeated the human Go champion Lee Se-dol 4 games to 1. About 19 months later, in October 2017, DeepMind unveiled a second version of the software, AlphaGo Zero, which beat its predecessor by an astonishing 100 games to nil. What was even more surprising was that AlphaGo Zero surpassed AlphaGo’s learning in just 40 days of self-play. These developments led many in Silicon Valley to believe it was only a matter of time before we had genuine thinking machines.
One reason for this belief is the terrifying idea of the technological singularity. In his book “2062: The World That AI Made”, Toby Walsh describes this concept as “the anticipated point in humankind’s history when we have developed a machine so intelligent that it can recursively redesign itself to be more intelligent”. At that point, humans would no longer be the most intelligent beings on the planet, and the fear looms large that we would be unable to monitor or control the development of this super-intelligent AI. When that happens, it could lead, intentionally or unintentionally, to the end of humanity.
The supporters of this concept, who tend to be futurists and philosophers rather than AI experts, feel it is a logical certainty: a question of “when”, not “if”. Several well-known technical minds, including Elon Musk and Bill Gates, have also warned that super-intelligent AI may not be favorably disposed towards the human race.
What has not been shown, however, is scientific evidence for the singularity, and after reading the arguments of renowned AI researchers, I too have considerable doubt about its inevitability. Some of the reasons are discussed below:
Anthropocentricity
Even modern biologists accept that the mechanisms underlying human intelligence are still not completely understood. Because of this, there is no good model of human intelligence for computers to emulate. Nor can intelligence be measured on a linear scale: one might argue that the IQ scale is quite reliable, but even IQ scores are skewed by the differing measurement standards of various institutions. The entire concept of intelligence is abstract and vague. So much so that we still cannot answer a very famous question: “Who is the smartest person who ever lived?”
There also appears to be a gap in the logic: if we are smart enough to build a machine smarter than us, then, the argument goes, this machine should be smart enough to redesign itself into an even smarter machine, and so on. There is no logical reason to support this. We might be able to build a machine smarter than ourselves, but that smarter machine might not necessarily be able to improve on itself; in most cases, it may not be able to better itself without human intervention. Even in the case of AlphaGo Zero, human intervention, in the form of the engineers at DeepMind, contributed to it becoming the best Go player in the world.
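To see why the recursion is not automatic, consider a minimal sketch in Python (my own illustrative toy model, not from Walsh’s book or any established result): if each generation of machine improves on its predecessor by a shrinking margin, the process converges to a ceiling instead of exploding.

```python
# Toy model: recursive self-improvement with diminishing returns.
# Each generation improves itself by a factor (1 + r**n); because the
# per-generation gain r**n shrinks geometrically (0 < r < 1), the
# product converges, so "intelligence" approaches a finite ceiling
# instead of exploding. All numbers here are illustrative assumptions.

def self_improvement(start: float, r: float, generations: int) -> list[float]:
    levels = [start]
    for n in range(1, generations + 1):
        levels.append(levels[-1] * (1 + r ** n))
    return levels

if __name__ == "__main__":
    trajectory = self_improvement(start=1.0, r=0.5, generations=30)
    # The gains flatten out quickly: ~2.38 by generation 10, and
    # barely moving after that -- no explosion in this toy world.
    for n in (0, 1, 5, 10, 30):
        print(f"generation {n:2d}: intelligence = {trajectory[n]:.4f}")
```

A singularity proponent could of course pick parameters that diverge; the sketch only shows that “a smarter machine builds a smarter machine” does not, by itself, tell us which regime we are in.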
Human-level intelligence, whatever it is, is the hypothetical tipping point for the technological singularity. However, we might not be smart enough to build machines that reach it, especially since human intelligence is, noticeably, not the same for everyone.
Thinking fast is not thinking smart
The speed at which signals travel through our neurons is around 100 metres per second: fast, but slower than the speed of sound (roughly 340 metres per second). Electrical signals in a computer, on the other hand, travel at close to the speed of light, around 300,000 kilometres per second. This raw advantage is compounded by Moore’s Law, the observation that the number of transistors that can be etched onto a sliver of silicon doubles at roughly two-year intervals. Clock speeds may have plateaued, but the rate at which computers process data still increases steadily.
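As a back-of-envelope check on these figures (the numbers below are the rough approximations quoted above, nothing more):

```python
# Back-of-envelope comparison of signal speeds, using the rough
# figures quoted above (all values are approximations).
NEURON_SPEED_M_S = 100           # ~100 m/s along a neuron
COMPUTER_SIGNAL_M_S = 3.0e8      # electrical signals, near light speed

ratio = COMPUTER_SIGNAL_M_S / NEURON_SPEED_M_S
print(f"Computer signals are ~{ratio:,.0f}x faster")  # ~3,000,000x

# Moore's Law: transistor count doubles roughly every 2 years,
# so over 20 years density grows by about 2**(20/2) = 1024x.
years = 20
density_growth = 2 ** (years / 2)
print(f"Transistor density over {years} years: ~{density_growth:.0f}x")
```

A million-fold speed advantage, then; but as the next paragraph argues, speed alone says nothing about being smarter.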
Proponents of the singularity cite this very property of computers as the factor that will lead to smarter machines. But processing speed alone cannot culminate in the singularity. Toby Walsh, in the book mentioned earlier, puts forth his “fast-thinking dog” argument: a fast-thinking dog would most likely still behave like a dog, albeit a faster one. Similarly, faster computers alone will not give us more intelligent computers.
As stated before, the true nature of intelligence has not been pinned down yet. But it is quite clear that faster processing alone does not yield intelligence; intelligence is a combination of past experience, common sense, the ability to refine one’s own thoughts, and much more.
Intelligence is not the only criterion for super-intelligence
There are other facets that we know contribute to intelligence, and it will be quite a while, maybe even forever, before machines begin to imbibe these qualities. Two of them are emotion and creativity.
There have been numerous AI algorithms for translating languages, or even transcribing them, and on these narrow tasks they often perform faster than humans, and sometimes just as well. However, these algorithms have not improved upon themselves to take on tasks like composing a sonnet; that still requires human supervision.

We have also heard of AI chatbots meant to interact with social-media users. I feel it is safe to say that we have met with little, if any, success in this regard, simply because these chatbots carry just one emotion: no emotion, and at best, contempt. It would be an uphill task for researchers to teach an algorithm how to be compassionate, kind, sympathetic or empathetic. For all we know, we already have chatbots that can talk in a language only they understand; Facebook’s negotiation bots, which drifted into their own shorthand in 2017, are an example. Frightening as this may sound, there is hardly any chance of such software evolving into a super-intelligence, as it lacks all the other facets needed to become intelligent, let alone super-intelligent.
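To illustrate just how shallow the machinery behind a chatbot’s apparent “emotion” can be, here is a minimal sketch of a lexicon-based sentiment scorer (my own illustrative example; production systems use trained models, but the point stands): recognising emotional words is not the same as having emotions.

```python
import string

# Minimal lexicon-based sentiment scorer: an illustration of how
# shallow "emotion detection" can be. The word lists are tiny,
# hand-picked examples, not a real lexicon.
POSITIVE = {"good", "great", "happy", "love", "wonderful"}
NEGATIVE = {"bad", "sad", "hate", "awful", "terrible"}

def sentiment(text: str) -> str:
    # Strip punctuation so "awful," still matches "awful".
    words = [w.strip(string.punctuation) for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this wonderful day"))  # -> positive
print(sentiment("What an awful, sad mess"))    # -> negative
```

A score of -2 is not sadness; the program has no more feeling about the sentence than a thermometer has about the weather.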
To conclude…
Every deep-learning algorithm designed to date has been the culmination of many humans thinking long and hard about the most efficient way to perform a task. Coupled with the fact that Moore’s Law seems to be grinding to a halt, it will be a mammoth task for humans to build thinking, self-improving machines. It is highly uncertain whether we will ever build one. So, relax!