We do not know how far we are from Artificial General Intelligence
HLAI 2016 hangover
A couple of weeks ago, I had the opportunity to attend The Joint Multi-Conference on Human-Level Artificial Intelligence (HLAI) 2016, held at The New School in New York City. If I had to choose one takeaway from it, it would be the fact that we simply cannot yet predict how far we are from Artificial General Intelligence (AGI), that is, AI systems possessing general intelligence at the human level and ultimately beyond. The main reason for this lack of predictability, according to the current leaders of the field, is simple: there are not enough people working on AGI, because most researchers are working on some kind of “narrow AI” aimed at solving specific tasks. We need more people working on AGI in order to accelerate, gain momentum and start to see the real path.
HLAI 2016 was held at The New School, in New York City.
HLAI 2016 was a combined effort of the four major conferences and academic events targeting work towards the computational (re-)creation of human-level intelligence:
- AGI, the Conference Series on Artificial General Intelligence.
- BICA, the Annual International Conferences on Biologically Inspired Cognitive Architectures.
- NeSy, the Workshop Series on Neural-Symbolic Learning and Reasoning.
- AIC, the Workshop Series on Artificial Intelligence and Cognition.
Think about it. The leading thinkers from the different fields of human-level AI research were present, and nobody was able to predict when their work would reach a mature level. This is a very interesting realization, because we are somehow used to always hearing that AGI is probably 20 years ahead of us, no matter in which year you ask the question. Even though it might seem frustrating to realize this, it is actually a very exciting time to join the party.
Party like it is 1950
The history of the AI field started in the 1950s, with the goal of building computer systems with human-like general intelligence. The von Neumann computer architecture, first described in 1945, was inspired by the human brain, with inputs, a central processing unit, memory, outputs and so on, even though nowadays there is growing evidence that the human brain is not an information processing machine. In the following decades, many expectations were raised but never fulfilled: a near future full of robots that would do most of the hard work, cheap energy, trips to the Moon and other planets, four-hour workdays. Since the 1960s, due to the difficulty of the original task, AI researchers have focused on what has been called “narrow AI” – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. We have seen computer systems beat humans at chess, Jeopardy! and Go. But some tasks that human intelligence solves in the brain of a four-year-old child, like learning a new language or formulating questions grounded in the environment while exploring it, are still far from being achieved. The reason is that we do not know exactly what the human brain is doing. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. The research has become truly multidisciplinary, and many new brain-imaging techniques have contributed to new discoveries about how our natural neural networks behave and work under many different conditions. Increasingly, there is a call for a transition back to confronting the more difficult issues of “human-level intelligence” and, more broadly, “artificial general intelligence” (AGI).
Dancing with the music
I hope you do not get this wrong: “narrow AI” can be very relevant, since it produces many incremental/horizontal advances. Machine learning and computer vision, for instance, permeate many of the software products we use today. But let's use a famous metaphor to illustrate what has been going on in the field. Imagine that we are engineers in the early 1930s and someone from the future travels back, meets us and leaves a new and unknown device for us to study and somehow reverse engineer: a TV, maybe with a DVD player so there would be something to watch. At first glance, everyone is amazed by the capabilities of that new technology and how it can reproduce the images and sounds of recorded scenes. So the engineers start trying to figure out how it works, examining its parts, maybe in a bottom-up way, opening it up, looking at its pieces, rewiring them, and so on. Suddenly, the engineers learn that the transistors can be wired together to make an amplifier, and they start building many applications for that. They get so excited about it, there is so much to do with this capability alone, and it is so much easier to make progress by narrowing the focus to it, that the original goal of understanding how the TV+DVD works is put aside. Well, that is more or less what has happened with AGI: in our analogy, the TV+DVD set is the human brain, the transistors are neurons and the amplifiers are neural networks. Machine learning is a great tool for learning patterns, and surely this is a feature of the human brain, but it is far from being all of it, or even the most important part. One of the topics discussed in the HLAI 2016 keynotes, especially by John Laird (Allen Newell's pupil) and Gary Marcus, was how Deep Learning and Big Data have both taken an approach to learning completely different from the one the human brain takes. Humans do not need to be fed hundreds of thousands of examples in order to learn to recognize objects, for instance. How many times does a six-year-old child have to see a dog before learning to recognize any dog in the world?
The top hits
Many other important topics were discussed. One topic that caught my attention as new (to me) was “narratives”, due to the hypothesis that narrative is not only a successful way of communicating, but a specific way of structuring knowledge – that is, the way we humans learn through storytelling. Emotions were again an important topic of discussion. I wish I had seen more on episodic/autobiographical memory.
But wait... if Deep Learning is not the right path to AGI, what technology should someone pursue in order to take on the AGI challenge? Well, according to John Laird, Cognitive Architectures are the technology capable of providing a framework to build AGI agents. Since 1990, his group has been developing a cognitive architecture called SOAR and applying it to many different tasks, in an effort to build agents that can work on a broad range of problems (instead of a narrow set of tasks), using a wide variety of methods, knowledge, and learning techniques. Laird presented some very impressive advances, like a robot that could learn how to play the Tower of Hanoi from a few sentences explaining the rules, starting from scratch, just as a human being presented with a new game's rules can start playing right away (instead of having to train on thousands of examples and patterns before learning it).
John Laird's talk about the advances achieved by his research group
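To give a flavor of what a cognitive architecture does at its core, here is a minimal sketch of the perceive-decide-act cycle that most of these architectures share. Everything below (the class names, the rule format, the naive conflict resolution) is a deliberately simplified illustration of the general idea, not SOAR's actual API.

```python
# A minimal, hypothetical sketch of a cognitive architecture's decision cycle.
# The names below (WorkingMemory, Rule, CognitiveAgent) are illustrative only
# and do not correspond to SOAR's actual API.

class WorkingMemory:
    """Holds the agent's current beliefs about the world as simple facts."""
    def __init__(self):
        self.facts = set()

    def update(self, percepts):
        self.facts |= set(percepts)


class Rule:
    """A production rule: if its condition matches working memory, it proposes an action."""
    def __init__(self, condition, action):
        self.condition = condition  # callable taking the set of facts, returning True/False
        self.action = action        # label of the proposed action

    def matches(self, facts):
        return self.condition(facts)


class CognitiveAgent:
    """Runs the perceive-decide-act cycle shared by many cognitive architectures."""
    def __init__(self, rules):
        self.memory = WorkingMemory()
        self.rules = rules

    def step(self, percepts):
        self.memory.update(percepts)                      # perceive: load new facts
        proposals = [r.action for r in self.rules
                     if r.matches(self.memory.facts)]     # match procedural knowledge
        return proposals[0] if proposals else "wait"      # decide (naive conflict resolution)


# Toy usage: an agent with a single rule about a puzzle state.
agent = CognitiveAgent([Rule(lambda facts: "disc_on_wrong_peg" in facts, "move_disc")])
print(agent.step({"disc_on_wrong_peg"}))   # prints: move_disc
```

The interesting part, and what systems like SOAR add on top of this skeleton, is that the rules themselves can be acquired from instruction and experience instead of being hand-coded, which is what makes learning a new game from a few sentences possible.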
Since 2010, the Biologically Inspired Cognitive Architectures Society has been promoting and facilitating the interdisciplinary study of biologically inspired cognitive architectures (BICA), aiming in particular at the emergence of a unifying, generally accepted framework for the design, characterization and implementation of human-level cognitive architectures. The BICA Challenge is the challenge of creating a general-purpose, real-life computational equivalent of the human mind, using an approach based on biologically inspired cognitive architectures, within a decade – which now seems to me a rather optimistic goal. Every year the society brings its members together at a conference to present advances and, as mentioned above, this year it was part of HLAI 2016.
Here in Brazil, in the Cogsys research group led by Prof. Ricardo Gudwin at Unicamp, we are also excited about Cognitive Architectures and have been developing a toolkit for building agents equipped with them. We have already started applying them to some challenging engineering control and optimization problems, like controlling traffic lights in an urban network, with good results. We have been members of the BICA Society since 2014 and have attended its conferences since then.
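For the curious, here is a rough sketch of how such an agent can be hooked up to a traffic-light control problem. The sensor, decision and actuator functions below are hypothetical stand-ins, not the actual interface of our toolkit; the point is only to show the same perceive-decide-act loop applied to an intersection.

```python
# A hypothetical, simplified sketch of a perceive-decide-act loop applied to a
# traffic-light controller. The function names below are stand-ins made up for
# illustration; they are not the actual interface of any toolkit.
import random

def read_queue_lengths():
    """Sensor stand-in: number of queued cars per approach."""
    return {"north_south": random.randint(0, 20), "east_west": random.randint(0, 20)}

def choose_green_phase(queues):
    """Decision step: give the green light to the approach with the longer queue."""
    return max(queues, key=queues.get)

def set_lights(phase):
    """Actuator stand-in: apply the chosen phase to the intersection."""
    print(f"Green phase: {phase}")

# Control loop: perceive traffic, decide on a phase, act on the lights.
for _ in range(3):
    queues = read_queue_lengths()
    set_lights(choose_green_phase(queues))
```

A real cognitive agent would, of course, replace the single greedy decision rule with richer knowledge, memory of past cycles and learning, but the overall loop stays the same.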
The after party
When I think about what it would be like to achieve the challenge of building artificial creatures with AGI, it really blows my mind. I believe it will be a truly singular moment in the history of humanity. It is amazing how little we know about how our brain really works, even though we have been able to do amazing things with it, like colonizing the planet, describing so many laws of physics, traveling to space and elaborating a theory that explains how life evolved from a single microorganism. I like to think that the human brain was probably the last technology that natural selection conceived, because due to its plasticity there is no need to encode all knowledge in DNA anymore: the human brain is able to learn almost any knowledge from experience or from culture (what past generations experienced), and new generations build on top of it. It is as if “DNA evolution” became obsolete after the human brain. As our species is a toolmaker, we expand and augment our capabilities with the tools our brains produce. In this sense, maybe AGI will be the last tool our brain will need to develop, because after that even computer coding will become obsolete, as there will be no need to program computers anymore. We are talking about real vertical progress, opening a completely new future to be built on top of this new technology/platform/paradigm. Of course there are many issues and discussions about the responsibilities that will come with more power, and fears and ethical questions concerning the dawn of AGI. However, that is a matter for another article. The way I see it, AGI is a tool that will have the potential to help us solve all the big issues of humanity, from saving the environment to ending inequality. It is all just beginning, and we need more people working on it, right now.
Senior IT Consultant - Architecture, Innovation, R&D
Indeed an excellent article. And by the way, I chose Computer Science because it is the most vast and amazing field of research. The linked article "The empty brain" (https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer) is also amazing, if not even better, for understanding some of the real challenges that AI faces today. Quote from The Empty Brain: "But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don't develop them – ever."
Managing Partner at Varejo 360 Inteligência e Pesquisa de Mercado
This kind of research reminds me why I chose computer science as a field of study. It is breath-taking. The possibilities are infinite. It feels like the first astronomers must have felt when they realised the size of the cosmos =).