The Death of True Intelligence?

[Alternative title: Quest for True Intelligence]

Much has been said recently about the danger of making computers super-intelligent: the concern is that they will soon surpass human intelligence and precipitate the enslavement of mankind, or even its obsolescence. Celebrated scientist Stephen Hawking and visionary businessman Elon Musk recently joined the bandwagon, raising such concerns without perhaps being fully aware of the true state of artificial intelligence (AI), the discipline considered to be the enabler of such a fate. As someone who has been at the forefront of realizing the AI dream, I can safely assert that we are about as close to reaching such a state as we are to achieving immortality. Stanford professor Andrew Ng put it even better: “it is an unnecessary distraction and worrying about super machine intelligence is like worrying about overpopulation of Mars.” To understand my rather pessimistic point of view (optimistic, though, for the fearful ones), one has to be taken through the evolution and current state of AI at a slightly technical level. To this end, I argue that current research and business trends are making things worse in terms of creating a true form of intelligence, if that is indeed our prime goal.

Misguided Prediction

The two analogies above underline how far off the state of singularity is, a term popularized by Ray Kurzweil to define a point in time when machines will take over from humans. It is a tall-order claim in his 2005 book that “Within several decades AI technologies will encompass all human knowledge and proficiency.” The path to singularity is an incremental one, much like increasing our life expectancy through the increasing use of electromechanical devices. Kurzweil’s singularity assumes exponential growth and disruptive innovation, but does not account for, among other things, the monstrous obstacles posed by big data. Besides, a conscious, malicious collaboration among intelligent entities seems a far cry when we are still scratching the surface in terms of creating a single truly autonomous intelligent entity.

The recent pace of automation is often cited as proof that AI will eventually rule the world. However, automation, be it a self-driving car, a robot cleaning the floor, flying and landing airplanes, credit risk assessment, or filtering your spam email, is achieved in a well-constrained world where all possible normalities and abnormalities for a particular task are laid out and modeled for computer implementation. For a self-driving car, we know what a route or an obstacle looks like, what a traffic jam means, and how to self-position and construct alternative routes. The car can perhaps even learn by observing driving habits, such as slowing down along a scenic route. But a human brain is far more versatile, able to deal with a tremendous breadth of tasks such as cooking, cleaning, studying, and handling machines. Moreover, a human can learn from successes and failures, and is able to communicate and collaborate with the external world. Understanding and modeling, or “reverse engineering,” the human mind remains elusive and has become one of mankind’s greatest challenges.

Heydays of Symbolic Artificial Intelligence

I entered the world of AI, intelligent software, and cognitive agents in the mid-80s for my doctoral studies. Those were the heydays of AI, when the perception was that ultimate intelligence could be achieved using just a symbolic representation of human knowledge. Hundreds of papers were published on ever more complex logics and associated inferencing to take us beyond simple Aristotelian syllogisms and modus ponens. Many different ways of inferring negative information from knowledge bases were proposed, with researchers arguing about whether penguins can or cannot fly. How do we extend monotonic classical logic to non-monotonic logic, capturing the human ability to revise our own beliefs? Many academic conferences and journals sprang up, and the research funding kept pouring in.
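The penguin debate above can be made concrete with a minimal sketch of non-monotonic (default) reasoning. The function and fact encoding here are invented for illustration; they do not represent any particular logic system's machinery:

```python
# Minimal sketch of default reasoning: "birds fly" holds by default, but a
# more specific fact ("is a penguin") defeats and retracts the conclusion.
# This non-monotonic behavior is exactly what classical logic cannot express.

def can_fly(animal, facts):
    """Apply the default rule unless a specific exception defeats it."""
    if ("penguin", animal) in facts:   # specific knowledge defeats the default
        return False
    if ("bird", animal) in facts:      # default conclusion: birds fly
        return True
    return False

facts = {("bird", "tweety"), ("bird", "pingu"), ("penguin", "pingu")}
print(can_fly("tweety", facts))  # True: the default applies
print(can_fly("pingu", facts))   # False: learning "pingu is a penguin" retracted it
```

Adding the fact that Pingu is a penguin *shrinks* the set of conclusions, which is the defining trait of non-monotonic logic.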

There was a tug of war over the fundamental nature of human reasoning between two camps: the symbolist camp, based on various forms of mathematical logic, and the subsymbolic camp, supporting a form of connectionism based on various forms of neural networks. Brain researchers found ways of modeling neurons, connections, and the flow of information. Disagreements within the subsymbolic camp were as profound as those in the logic camp. As scientists in the logic camp moved beyond propositional and first-order logics to higher-order classical logics and various non-classical logics, the subsymbolic camp continued to invent new connectionist models on a regular basis.
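The connectionist idea of modeling neurons and connections can be illustrated with the simplest possible unit, a single perceptron that learns logical OR from examples. This is a didactic sketch in pure Python, not any specific historical model:

```python
# A toy connectionist unit: one perceptron learning logical OR purely from
# observed input/output pairs -- the "learn from data" idea in miniature.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0                 # weights and bias start at zero
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out             # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```

No rule for OR is ever written down; the weights absorb it from the observations, which is precisely the property that let connectionists sidestep hand-coded knowledge.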

Knowledge Acquisition Bottleneck and the Failure of AI

It eventually became apparent that to give computers human-like intelligence, a vast amount of knowledge representing commonsense is required, and that acquiring it is a huge bottleneck. The advantage the connectionist camp had is that their methods build such models from observations almost without human intervention, thus avoiding the knowledge acquisition bottleneck. On the other hand, the symbolic camp criticized the connectionist approach for lacking any kind of explanatory power, notably with regard to classification, recognition, and recommendation for decision support.

Commonsense knowledge is what everyone knows about the world: how physical objects work, how people interact, how you yourself think, animals, the concepts of space and time, and so on. All of this common knowledge we generally share as a species and use to inform our every decision. The sheer amount of data involved makes it difficult to teach a machine our perception of commonsense. What troubles NYU professors Ernest Davis and Gary Marcus is the lack of research and attention given to commonsense reasoning in AI: “There is a lot of knowledge that probably needs to be hand-coded.” Google Translate, which makes use of very large parallel corpora, has “no commonsense at all and doesn’t actually understand the sentences you are typing in.” Neither does IBM’s Watson: as it plays Jeopardy, it merely eliminates possibilities syntactically from billions of stored pieces of knowledge. A robot in a factory does not need much commonsense knowledge because it automates a number of routine tasks; the same is the case with Deep Blue, which encodes heuristics from the specific domain of chess. However, if I have a robot at home and ask it to put the plate in the dishwasher, it needs to know that the plate is not the cat, and that the cat is my pet.

Acquisition of knowledge used to be an area of active research, but people have shied away from it recently. The late MIT professor Marvin Minsky spoke highly of logical systems like Cyc, an ongoing effort to accumulate commonsense knowledge, which needs to be exhaustively constructed only once and can then be replicated. If I push a wine glass off the table and it falls on a concrete floor, I know through common sense that it will break. But what if a sudden gust blew a cushion off an adjacent sofa and placed it right where the glass will fall? There are infinitely many things in the environment that can change the outcome and prevent the glass from breaking, yet one must draw a boundary defining what will not prevent it. This is the frame problem: the problem of specifying what in the environment does not change as the result of an action. Humans excel at dealing with unexpected events, but a robot requires a vast amount of knowledge encoding unforeseen circumstances.

Intelligence is always relative, meaning a computer system may exhibit intelligent behavior to a particular individual, thus passing the Turing test, but may not appear as intelligent when facing an alternative interrogation. This is precisely why flashy ads by big corporations showing people conversing with machines easily convince AI-illiterate consumers. A symbolic system is one where knowledge is captured in logical formulae and inference is employed to deduce the truth or falsehood of a formula from fundamental axioms. However, such a system will always be incomplete and cannot achieve an absolute form of intelligence: according to Kurt Gödel, there will always be a truth that humans can comprehend but whose truth or falsehood the system will not be able to determine.

Communication Criticality

The communicative aspect naturally relies heavily on language processing ability: machines have to semantically interpret, or understand, exactly what needs to be done. Even in the simplest case, when we pose a query to Google, it tries to understand the user’s intent, which, according to Eric Schmidt, is the most challenging aspect of search. If I ask Google to “show me a recipe of roasted chicken without garlic,” a keyword search will include the ones that do have garlic. Most such questions to Siri redirect to a simple Google search due to its inability to interpret unforeseen questions.
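The garlic example can be reproduced in a few lines. The recipe data and the naive matcher below are invented purely for illustration; real search engines are far more sophisticated, but the failure mode with negation is the same:

```python
# Why naive keyword search mishandles negation: in the query "roast chicken
# without garlic", the word "garlic" is just another matching term, so recipes
# that CONTAIN garlic are retrieved instead of excluded.

recipes = {
    "lemon roast chicken": "chicken lemon thyme butter",
    "garlic roast chicken": "chicken garlic rosemary oil",
}

def keyword_search(query, docs):
    terms = set(query.lower().split())
    return [name for name, text in docs.items()
            if terms & set(text.split())]   # any overlapping term is a hit

# Both recipes come back -- "without" carries no semantics for the matcher.
print(keyword_search("roast chicken without garlic", recipes))
```

Honoring "without" requires interpreting the query's intent, not just its words, which is exactly the semantic gap discussed above.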

Natural Language Processing (NLP) is fundamental to realizing the communicative behavior of an intelligent agent or robot. The recent glorification of NLP and parallel-corpora-based translation is another blow to true intelligence. The very fact that the translation mechanism is not a semantic one, but rather one of finding a similar dialogue that has been spoken before, means it does not represent a true form of intelligence. The NLP state of the art is largely confined to finding keywords or phrases, far from understanding semantically and then generating sentences as answers and recommended actions.

Arrival of Machine Learning and Big Data

The rise of the machine learning (ML) field from the 90s was an obvious consequence of the knowledge acquisition bottleneck. As this trend evolved, we inevitably arrived at the big data problem. Again, scientists are fascinated by “cool” algorithmic challenges, largely disregarding the difficulty of dealing with noisy, incomplete, and disparate information generated from real-world processes. It is one thing to search through petabytes of data; it is a completely different level of complexity to extract patterns hidden in such huge volumes of data (i.e., abstractions of data that must grow to support robot intelligence) that human eyes would never be able to extract. A side note of caution: we are blindly infusing everything into the cloud without a solid understanding of how to process all that data on a large scale to extract actionable insights with deep analytics.

The current ML research is mirroring the mistakes made by 80’s AI researchers, assuming every problem can be solved with a single ML paradigm.

Deep (and Wide) Learning Kitchen Sink

Deep learning stems from the connectionist approach and hence inherits all of its characteristics, both good and bad. Its kitchen-sink approach makes the same mistake the symbolists did: focusing on a single paradigm to achieve intelligence. Throw all the observations into one learning engine and magically solve all problems. Why should I have to extract certain patterns by processing thousands or millions of observations when an expert can provide the same abstraction in seconds? Moreover, the performance of deep learning is still nascent, as it takes hours to process and classify just several hundred thousand images. We are at a primitive stage of auto-captioning images and videos, and thus far away from creating a robot’s emotion upon perceiving a scene.

Google DeepMind takes a very fundamental approach to building an intelligent system: reward-based learning from observed actions, much like what a baby does. It is a step in the right direction for building a general intelligent machine, but it still requires elevation from a reactive mode to a knowledge-based deliberative mode through the injection of subjective knowledge.
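The reward-based learning idea can be sketched with tabular Q-learning, the textbook ancestor of DeepMind's approach (their systems combine it with deep networks). The corridor environment below is invented for illustration: an agent discovers, purely from trial, error, and reward, that walking right reaches the goal:

```python
# Minimal reward-based learning sketch: tabular Q-learning on a 5-cell
# corridor. The agent receives a reward only at the goal cell and learns a
# policy by trial and error -- no rules about the environment are hand-coded.
import random

N_STATES, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}  # actions: left/right

random.seed(0)
for _ in range(200):                                  # episodes
    s = 0
    while s != GOAL:
        if random.random() < 0.2:                     # epsilon-greedy exploration
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)         # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0                # reward only at the goal
        best_next = max(Q[(s2, -1)], Q[(s2, +1)])
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])  # Q-update
        s = s2

policy = [max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # learned greedy action per non-goal state
```

After enough episodes the greedy policy moves right everywhere, purely because reward propagated backward through the Q-values; this is the reactive mode the article argues must be elevated with deliberative, subjective knowledge.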

Hybrid Future

A framework for true intelligence should be able to reason the way we do: amalgamate subjective and often uncertain wisdom from experts with learning from observation. For almost three decades, Prof. Tomaso Poggio at MIT’s McGovern Institute has been investigating and developing computational models of the visual cortex, the area of the brain that decodes visual information relayed from our eyes. He simulates this process on a computer using neural network models based on the known properties of real neurons. In my personal communication with him, Prof. Poggio expressed a strong need for a hybrid simulation architecture that incorporates the commonsense knowledge we gather from experience. Prof. John Fox, co-author of one of my books and a cognitive engineering faculty member at Oxford University, and MIT Lincoln Lab’s Dr. Sanjeev Mohindra think much along the same lines.

In Conclusion

I’m not a pessimist but a realist. Tremendous progress has been made thus far, but the glorification by big corporations and the tall claims by individual researchers are disproportionate to the real progress. While disparate views are key to innovation, a concerted effort is required, something like the multinational LHC research at CERN, to achieve true intelligence. Short-term business gains from gimmicks are ubiquitous these days, and in that process we are impelling the death of true machine intelligence in whatever primitive form it currently exists.

About the Author

Dr. Subrata Das is the founder and president of Machine Analytics (www.machineanalytics.com), chief data scientist at Alphaserve Technologies, technology consultant at MIT Lincoln Lab, and an adjunct faculty member at the Villanova School of Business and Northeastern University. In the past, Subrata held a research position at Imperial College, London, and a lab manager position at the Xerox European Research Centre. Subrata has led many projects funded by DARPA and the DoD, and has been at the forefront of applied R&D in computational artificial intelligence, big data fusion, machine learning, and deep linguistic processing. He holds a PhD in AI and Databases and has published many journal and conference articles.

Subrata is the author of several books, including the most recent, Computational Business Analytics, published by CRC Press/Chapman and Hall. He also co-authored Safe and Sound: Artificial Intelligence in Hazardous Applications, published by the MIT Press (Nobel Laureate Herbert Simon wrote the book’s foreword). Subrata regularly gives seminars and training courses based on his books.

Subrata’s LinkedIn profile: https://www.dhirubhai.net/in/subrata-das-1293354. He can be contacted at +1 617 797 1077 or [email protected].

Subrata thanks Sebastien Das for editing an initial version of this article.

Pic credit: clickypix.com

Bo Wang

W.R. Grace and Company - Your Integrated Drug Substance Partner

5 years ago

Great knowledge! Bonnie
Subrata Das

Gen AI Professor & Principal AI & Data Scientist

7 years ago

Some additional discussion on the article in the Artificial Intelligence group: https://www.dhirubhai.net/groups/37945/37945-6268761414710632448

For me, the issue is not so much that super machine intelligence will consume mankind; it's that, in their endeavors to achieve it, one or more players in the AI industry may inadvertently exploit human curiosity in ways that might irreversibly violate fundamental human rights! The abundance of interest in functional brain mapping and cognitive computing, and the lack of a systematic effort to self-regulate AI research methods, is probably the more pressing issue of the day!

Intelligences are dying without knowing it, shaped by the outer environment. Machines can surpass humans with less AI than we thought, for the brain is volatile, or too naive to avoid being deceived. It is happening.
Bart Barthelemy

President, Collaborative Innovation Institute + Founding Director, Wright Brothers Institute

7 years ago

Reinforces how millions of years of evolution have produced a very complex and phenomenal brain, which will not be easy to reproduce.
