Can AI really take over the world?
Can Machines Think? This has been a conundrum for scientists and philosophers for some seventy years. Its genesis lies with one of the principal inventors of the modern computer, the great British mathematician Alan Turing. It was he who first worked out the design concept of a programmable computing machine, now known as the Turing Machine.
In 1951, Alan Turing remarked: “If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position … we should, as a species, feel greatly humbled.” Thought-provoking indeed, and human curiosity took that journey forward into Artificial Intelligence, where intelligent machines are capable of delivering pre-defined tasks without human intervention. But a thinking machine is still a far cry!
Now the question is: can we dismiss the possibility of intelligent machines taking over the world, or manipulating humans into letting them do so? Most of us will dismiss these ideas as science fiction and start drawing analogies to movies with Terminator-style robots taking over. However, given the current capabilities of AI and machine intelligence, one can safely say that it will still be some time before an artificial machine reaches the state of human-level Artificial General Intelligence (AGI).
The idea of AGI goes back to IBM’s Deep Blue system, a narrow-AI approach that in 1997 defeated world chess champion Garry Kasparov; unfortunately, that same skill could not be applied to any other problem domain without substantial human engagement and reprogramming. This was followed by IBM Watson’s question-answering system, which in 2011 defeated two well-known quiz champions, yet the system again failed to draw on the deeper, experience-based knowledge that humans possess. Since then, research and development of AI systems have leapfrogged substantially; however, human reprogramming and the preparation of relevant data have remained key prerequisites every time a new domain or a new type of task is addressed.
When we map current AI technologies and tools, whether machine learning, deep learning, or NLP-based programs delivering pre-defined, data-driven scores and repeatable tasks in business, from support centres and task orchestration in insurance to customer-engagement automation and connected cars, such deliverables are already showing promising results. But these roles of AI are generally restricted to the unique domain or task they are programmed for, and they are still far from the basic cognitive capabilities and common sense of even a typical six-year-old child, let alone a fully educated adult professional. Most AI capabilities remain confined to the narrowly defined metrics they are built to deliver.
Let us look at the barriers to AGI. The primary challenge is the heterogeneity of general human intelligence and the limits of our technical ability, so far, to develop a comprehensive and delicate measurement system that can recognize fine-grained tasks and replicate human-like recognition, interpretation, and visualization in order to facilitate sensitive and expressive actions. The even more difficult part is mapping the infinitely diverse characteristics of human cognitive functions into a super-intelligent machine that is as sensitive as a human. Interestingly, at the beginning of this century most scientific approaches to this subject rested on the principle that general intelligence was biologically determined and had evolved over millions of years. Scientists, philosophers, and technologists over the last hundred years have redefined, and continue to build, their research on hypotheses about human intelligence. This leads into another huge area, the understanding and functioning of the human brain, which I would like to keep aside from the current topic of discussion.
The most intriguing part of creating an intelligent machine is that it must start with a fundamental understanding of intelligence and of the variability in intelligence quotient across individuals. Can we define intelligence as a single, undifferentiated capacity of an individual? There are several concerns with that view. Consider individual performance across knowledge domains and its correlation: it is not unusual for a person’s skill level in one domain to be considerably higher or lower than in another, which is defined as intraindividual variability. In the second case, two individuals with comparable overall intelligence levels might differ significantly across specific knowledge domains, which is termed interindividual variability. In 1983, Dr. Howard Gardner, Professor of Education at Harvard University, came up with a theory of multiple intelligences, proposing eight distinct forms or types of intelligence: (1) linguistic, (2) logical-mathematical, (3) musical, (4) bodily-kinesthetic, (5) spatial, (6) interpersonal, (7) intrapersonal, and (8) naturalist. Gardner’s theory suggests that everyone’s intellectual skill is represented by an intelligence profile, a unique mosaic or combination of skill levels across the eight forms of intelligence.
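To make the idea of an intelligence profile and these two kinds of variability concrete, here is a minimal, purely illustrative sketch in Python. The eight type names follow Gardner's list above; the 0-to-1 scoring scale and the helper names are my own assumptions, not part of any established psychometric framework.

```python
from dataclasses import dataclass, field
from typing import Dict

# Gardner's eight intelligence types, as listed above.
INTELLIGENCE_TYPES = [
    "linguistic", "logical-mathematical", "musical", "bodily-kinesthetic",
    "spatial", "interpersonal", "intrapersonal", "naturalist",
]

@dataclass
class IntelligenceProfile:
    """Skill level per intelligence type, on an arbitrary 0.0-1.0 scale."""
    scores: Dict[str, float] = field(
        default_factory=lambda: {t: 0.0 for t in INTELLIGENCE_TYPES}
    )

    def intraindividual_spread(self) -> float:
        """Gap between one person's strongest and weakest domains."""
        return max(self.scores.values()) - min(self.scores.values())

def interindividual_gap(a: IntelligenceProfile, b: IntelligenceProfile) -> Dict[str, float]:
    """Per-domain differences between two people with possibly similar overall levels."""
    return {t: a.scores[t] - b.scores[t] for t in INTELLIGENCE_TYPES}

# Two hypothetical people with the same average score but very different profiles.
alice = IntelligenceProfile({**{t: 0.5 for t in INTELLIGENCE_TYPES},
                             "musical": 0.9, "spatial": 0.1})
bob = IntelligenceProfile({**{t: 0.5 for t in INTELLIGENCE_TYPES},
                           "logical-mathematical": 0.9, "interpersonal": 0.1})
print(alice.intraindividual_spread())   # 0.8 -> large intraindividual variability
print(interindividual_gap(alice, bob))  # domain-level interindividual differences
```

Even this toy structure hints at the problem: a single number cannot capture the profile, and two profiles with the same average can differ in every domain.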
According to Gardner's theory, one form of intelligence is not better than another; in fact, they are all equally valuable and viable. Yet he found that different cultures are biased towards or against certain types of intelligence. For instance, we may favor linguistic and logical-mathematical intelligence while undervaluing others, such as naturalistic or bodily-kinesthetic intelligence.
Gardner’s eight intelligence types can then be further sub-divided into some of the key competency areas (below) that are associated with human-level general intelligence.
- Perception: vision, smell, touch, taste.
- Actuation: physical skill, navigation, proprioceptive senses, etc.
- Memory: working, episodic, semantic, procedural.
- Reasoning: induction, deduction, abduction, associational, etc.
- Attention: visual, auditory, social.
- Communication: gestural, verbal, pictorial, diagrammatic, language acquisition, etc.
- Social interaction: communication, social inference, relationships, cooperation, and competition.
- Learning: imitation, reinforcement, experimentation.
- Planning: tactical, strategic, physical, social, etc.
- Emotion: understanding, sympathy, empathy, perceived, expressed.
- Motivation: sub-goal creation, affect-based motivation, deferred gratification, selflessness.
- Modelling self and others: self-awareness, environmental awareness, self-control, relationships, sympathy, etc.
The competencies and sub-areas above are only a fraction of the variability that constitutes human general intelligence. For AGI to encompass even one intelligence type, and to deliver full human-level performance in that specific domain alone, would require innumerable multidimensional learning transformations to match human cognitive and intellectual abilities. Over and above that, an intelligent machine would have to master the multiple competency elements and their sub-elements within that intelligence type. AGI would also need to be endowed with intricate empirical knowledge of language, society, culture, and environment, as that will be equally important for differentiating appropriate responses on a par with human intelligence.
This, for sure, is getting more complex than imagined. So, let us go back to the questions: ‘CAN MACHINES THINK?’ ‘IS IT POSSIBLE TO CREATE HUMAN-LEVEL AI?’ ‘CAN AI REALLY TAKE OVER THE WORLD?’
The most appropriate answer would be TIME. One thing is for sure: whenever AI reaches even a near-equivalent state of AGI, it will change the way we work and play, our sense of self, life, and death, and the goals we set for ourselves and for our societies. Evolution, civilization, and human endurance have repeatedly proved that change is inevitable; who knows, it might lead to new beings and new ways of being. It would also be of significance beyond our species, beyond history.
The questions all curious minds will be longing to answer –
How? To what degree? And by when?
No fortune teller, I am sure, would even attempt any sort of calculated guess, as it would be detrimental to their own business. However, what we do know for sure is that computing power has doubled roughly every 18 months over the last four decades, and it is expected to touch 10 billion MIPS (1 MIPS = 1 million instructions per second) by 2030. Processing power of 10 billion MIPS is often equated with a human-brain equivalent.
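To see what that doubling rule implies, here is a rough back-of-the-envelope sketch in Python. The 10,000-MIPS baseline in the year 2000 is my own illustrative assumption, chosen only to show the arithmetic; it is not a figure from the article's sources.

```python
import math

# Back-of-the-envelope extrapolation of "computing power doubles every 18 months".
baseline_mips = 10_000            # assumed processor speed around the year 2000 (illustrative)
baseline_year = 2000
target_mips = 10_000_000_000      # 10 billion MIPS, the "human brain equivalent" cited above
doubling_period_years = 1.5       # 18 months

doublings_needed = math.log2(target_mips / baseline_mips)   # ~19.9 doublings
years_needed = doublings_needed * doubling_period_years     # ~30 years
print(f"~{doublings_needed:.1f} doublings, reaching the target around "
      f"{baseline_year + years_needed:.0f}")                # around 2030
```

Under those assumptions the extrapolation does land close to 2030, though any change in the baseline or the doubling period shifts the date considerably.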
Is there a significant chance that we will witness human-level AGI? Your guess is as good as mine.
- 50% chance by 2050
- 80% chance by 2100
- 10% chance never
The ideas presented in this article are based on my findings and understanding from several research papers and articles by scholars in the field, as provided in the references below.
At the end, I want to leave you with this thought. AI is manifested through computer-generated code. If computer-generated binary code can be inserted into physical strands of DNA (a possibility already tested in 2017 by a group of researchers at the University of Washington, with the aim of showing that computers working in gene sequencing were vulnerable to attack), it goes some way to suggest that what we understand to be biological reality may be just computer code all along.
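As a toy illustration of how binary data can be mapped onto DNA bases, here is a minimal sketch using a common two-bits-per-base convention (00→A, 01→C, 10→G, 11→T). This encoding is purely illustrative and is not the scheme used by the University of Washington team; real DNA data storage also needs error correction and synthesis constraints.

```python
# Toy mapping between binary data and DNA bases: two bits per nucleotide.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode raw bytes as a string of DNA bases."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(strand: str) -> bytes:
    """Decode a string of DNA bases back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

encoded = bytes_to_dna(b"AI")        # -> 'CAACCAGC'
assert dna_to_bytes(encoded) == b"AI"
print(encoded)
```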
Souma is one of the co-founders and Chief Strategy Officer at iFIX tech Global. He currently focuses on the company’s GTM strategy and global market expansion. Previously, he was Managing Director at Teradata India. https://www.dhirubhai.net/in/soumad/
References:
Adams, S., Arel, I., Bach, J., Coop, R., Furlan, R., Goertzel, B., Hall, J. S., Samsonovich, A., Scheutz, M., Schlesinger, M., Shapiro, S. C., & Sowa, J. (2012). Mapping the Landscape of Human-Level Artificial General Intelligence.
Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences.