From Here to Human-Level Artificial General Intelligence in Four (Not All That) Simple Steps

In the 15 years since I first introduced the term “artificial general intelligence” (AGI), the AI field has advanced tremendously. We now have self-driving cars, automated face recognition and image captioning, machine translation and expert AI game-players, and so much more.

However, these achievements remain essentially in the domain of “narrow AI”—AI that carries out tasks based on specifically supplied data or rules, or carefully constructed training situations. AIs that can generalize to unanticipated domains and confront the world as autonomous agents are still part of the road ahead.

The question remains: what do we need to do to get from today’s narrow AI tools, which have become mainstream in business and society, to the AGI envisioned by futurists and science fiction authors?

The Diverse Proto-AGI Landscape

While there is a tremendous diversity of perspectives and no shortage of technical and conceptual ideas on the path to AGI, there is nothing resembling consensus among experts on the matter.

For example, Google DeepMind co-founder Demis Hassabis has long been a fan of relatively closely brain-inspired approaches to AGI, and continues to publish papers in this direction. On the other hand, the OpenCog AGI-oriented project that I co-founded in 2008 is grounded in a less brain-oriented approach: it involves neural networks, but it also heavily leverages symbolic-logic representations, probabilistic inference, and evolutionary program learning.

The bottom line is, just as we have many different workable approaches to manned flight—airplanes, helicopters, blimps, rockets, etc.—there may be many viable paths to AGI, some of which are more biologically inspired than others. And, somewhat like the Wright brothers, today’s AGI pioneers are proceeding largely via experiment and intuition, in part because we don’t yet know enough useful theoretical laws of general intelligence to proceed with AGI engineering in a mainly theory-guided way; the theory of AGI is evolving organically alongside the practice.

Four (Not Actually So) Simple Steps From Here to AGI

In a talk I gave recently at Josef Urban’s AI4REASON lab in Prague (where my son Zar is doing his PhD, by the way), I outlined “Four Simple Steps to Human-Level AGI.” The title was intended as dry humor, as none of the steps is actually simple at all. But I do believe they are achievable within our lifetime, maybe even in the next 5-10 years. Better yet, each of the four steps is currently being worked on by multiple teams of brilliant people around the world, including but by no means limited to my own teams at SingularityNET, Hanson Robotics, and OpenCog.

The good news is, I don’t believe we need radically better hardware, nor radically different algorithms, nor new kinds of sensors or actuators. We just need to use our computers and algorithms in a slightly more judicious way by doing the following.

1) Make cognitive synergy practical

We have a lot of powerful AI algorithms today, but we don’t use them together in sufficiently sophisticated ways, so we lose much of the synergetic intelligence that could come from using them together. By contrast, the different components in the human brain are tuned to work together with exquisite feedback and interplay. We need to make systems that enable richer and more thorough coordination of different AI agents at various levels into one complex, adaptive AI network.

For instance, within the OpenCog architecture, we seek to realize this by making different learning and reasoning algorithms work together on the Atomspace Hypergraph, which allows for the creation of hybrid networks consisting of symbolic and subsymbolic segments. Our probabilistic logic engine, which handles facts and beliefs, our evolutionary program learning engine, which handles how-to knowledge, our deep neural nets for handling perception—all of these cooperate in updating the same set of hypergraph nodes and links.
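To make the idea concrete, here is a minimal sketch in Python of the pattern described above: several learning processes writing into one shared hypergraph-like store. The class names, truth-value updates, and thresholds are all invented for illustration; this is not OpenCog's actual Atomspace API.

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    name: str
    # Truth value as (strength, confidence), loosely echoing OpenCog's convention.
    strength: float = 0.5
    confidence: float = 0.0
    links: set = field(default_factory=set)

class AtomSpace:
    """A toy shared knowledge store that multiple algorithms update."""
    def __init__(self):
        self.atoms = {}

    def get(self, name):
        # Create the atom on first reference so all learners share one node.
        return self.atoms.setdefault(name, Atom(name))

    def link(self, a, b):
        self.get(a).links.add(b)
        self.get(b).links.add(a)

def perception_update(space, label, net_confidence):
    # A perception net reports what it saw; it writes into the shared space.
    atom = space.get(label)
    atom.strength = max(atom.strength, net_confidence)
    atom.confidence = min(1.0, atom.confidence + 0.1)

def inference_update(space, premise, conclusion):
    # A logic engine reads atoms the perception layer wrote and adds links.
    if space.get(premise).strength > 0.7:
        space.link(premise, conclusion)
        space.get(conclusion).confidence += 0.05

space = AtomSpace()
perception_update(space, "cat_in_view", 0.9)               # subsymbolic input
inference_update(space, "cat_in_view", "animal_in_view")   # symbolic step
print(space.get("animal_in_view").links)                   # {'cat_in_view'}
```

The point of the toy is the shared store: neither component calls the other directly; each simply reads and writes the common hypergraph, and the synergy emerges from that shared substrate.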

On a different level, in the SingularityNET blockchain-based AI network, we work toward cognitive synergy by allowing different AI agents using different internal algorithms to make requests of each other and share information and results. The idea is that the network of AI agents, using a customized token for exchanging value, can become an overall cognitive economy of minds with an emergent-level intelligence going beyond the intelligence of the individual agents. This is a modern blockchain-based realization of AI pioneer Marvin Minsky’s idea of intelligence as a “society of mind.”
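As a toy illustration of such an economy of minds, the sketch below has agents that pay each other tokens to invoke one another's services. The agent names, prices, and token bookkeeping are hypothetical; SingularityNET's real protocol is blockchain-based and far more involved.

```python
class Agent:
    """A toy AI agent that sells one skill and buys skills from other agents."""
    def __init__(self, name, skill, price, tokens=100):
        self.name, self.skill, self.price, self.tokens = name, skill, price, tokens

    def serve(self, request):
        # Stand-in for running the agent's internal algorithm on the request.
        return f"{self.skill}({request})"

    def request(self, other, payload):
        # Pay the provider, receive the result; network-level intelligence
        # emerges from agents composing each other's services.
        if self.tokens < other.price:
            raise RuntimeError(f"{self.name} cannot afford {other.name}")
        self.tokens -= other.price
        other.tokens += other.price
        return other.serve(payload)

vision = Agent("vision", "caption", price=5)
reasoner = Agent("reasoner", "explain", price=8)

# The reasoner buys a caption from the vision agent, then explains it.
caption = reasoner.request(vision, "frame_042.png")
print(reasoner.serve(caption))          # explain(caption(frame_042.png))
print(reasoner.tokens, vision.tokens)   # 95 105
```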

2) Bridge symbolic and subsymbolic AI

I believe AGI will most effectively be achieved via bridging of the algorithms used for low-level intelligence, such as perception and movement (e.g., deep neural networks), with the algorithms used for high-level abstract reasoning (such as logic engines).

Deep neural networks have had amazing successes lately in processing multiple sorts of data, including images, video, audio, and to a lesser extent, text. However, it is becoming increasingly clear that these particular neural net architectures are not quite right for handling abstract knowledge. Cognitive scientist and AI entrepreneur Gary Marcus has written articulately on this; SingularityNET AI researcher Alexey Potapov has recently reported on his experiments probing the limits of the generalization ability of current deep neural net frameworks.

My own intuition is that the shortest path to AGI will be to use deep neural nets for what they’re best at and to hybridize them with more abstract AI methods like logic systems, in order to handle more advanced aspects of human-like cognition.
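A minimal sketch of this kind of hybrid, under invented assumptions: a stub standing in for a deep net emits soft perceptual predicates, and a tiny rule engine draws abstract conclusions from them. The predicates, rules, and threshold are illustrative only.

```python
def neural_perception(image_id):
    # Placeholder for a real deep net: maps an image to predicate -> probability.
    return {"wet_ground": 0.92, "umbrella": 0.85, "sunny": 0.03}

RULES = [
    # (premises, conclusion): fire when all premises exceed the threshold.
    (("wet_ground", "umbrella"), "recently_rained"),
    (("sunny",), "dry_ground"),
]

def symbolic_inference(facts, threshold=0.7):
    derived = dict(facts)
    for premises, conclusion in RULES:
        if all(derived.get(p, 0.0) >= threshold for p in premises):
            # Conservative fuzzy conjunction: the conclusion is only as
            # strong as the weakest premise supporting it.
            derived[conclusion] = min(derived[p] for p in premises)
    return derived

facts = neural_perception("street_scene.jpg")
print(symbolic_inference(facts))
# {'wet_ground': 0.92, 'umbrella': 0.85, 'sunny': 0.03, 'recently_rained': 0.85}
```

The division of labor is the point: the net handles messy, high-dimensional perception it is good at, while the rule layer handles compositional abstraction that current nets generalize poorly on.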

3) Whole-organism architecture

Humans are bodies as much as minds, and so achieving human-like AGI will require embedding AI systems in physical systems capable of interacting with the everyday human world in nuanced ways.

The “whole organism architecture” (WHOA!!!) is a nice phrase introduced by my collaborator in robotics and mayhem, David Hanson. Currently, we are working with his beautiful robotic creation Sophia, whose software development I have led as a platform for experimenting with OpenCog and SingularityNET AI.

General intelligence does not require a human-like body, nor any specific body. However, if we want to create an AGI that manifests human-like cognition in particular and that can understand and relate to humans, then this AGI will need to have a sense of the peculiar mix of cognition, emotion, socialization, perception, and movement that characterizes human reality. By far the best way for an AGI to get such a sense is for it to have the ability to occupy a body that at least vaguely resembles the human body.

The need for whole organism architecture ties in with the importance of experiential learning for AGI. In the mind of a human baby, all sorts of data are mixed up in a complex way, and the goals and objectives need to be figured out along with the categories, structures, and dynamics in the world. Even the distinction between self and other and the notion of a persistent object have to be learned. Ultimately, an AGI will need to do this sort of foundational learning for itself as well.

While it is not necessarily wrong to supply one’s AGI system with data from texts and databases, one still needs to build a system that interacts with, perceives, and explores the world autonomously and builds its own model of itself and the world. The semantics of everything it learns is then grounded in its own observations. If it learns about something abstract, like language or math, it has to be able to ground the semantics of that in its own life, as well as in the abstraction.

Experiential learning does not require robotics. But whole-organism robotics does provide an extremely natural venue for moving beyond today’s training-by-example AIs to experiential learning.

4) Scalable meta-learning

AGI needs not just learning but also learning how to learn. An AGI will need to apply its reasoning and learning algorithms recursively to itself so as to automatically improve its functionality.

Ultimately, the ability to apply learning to improve learning should allow AGIs to progress far beyond human capability. At the moment, meta-learning remains a difficult but critical research pursuit. At SingularityNET, for instance, we are just now beginning to apply OpenCog’s AI to recognize patterns in its own effectiveness over time, so as to improve its own performance.
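The sketch below gives a flavor of that recursive loop in deliberately toy form: a learner monitors its own recent performance and adjusts its own learning rate when progress stalls. The scoring function and adaptation rule are invented for illustration and are not OpenCog's actual mechanism.

```python
import random

def train_step(lr):
    # Stand-in for one learning episode; in this toy, moderate rates
    # (near 0.1) score best, with a little noise added.
    return max(0.0, 1.0 - abs(lr - 0.1) * 5) + random.uniform(-0.05, 0.05)

lr, history = 0.5, []
for step in range(30):
    score = train_step(lr)
    history.append(score)
    # Meta-level: the system observes its own learning curve and, when the
    # last two episodes fail to beat the two before, shrinks its learning rate.
    if len(history) >= 4 and sum(history[-2:]) <= sum(history[-4:-2]):
        lr *= 0.8

print(f"final lr={lr:.3f}, last score={history[-1]:.2f}")
```

Real meta-learning operates on far richer objects than a single scalar hyperparameter, of course, but the structure is the same: the learning algorithm's own behavior becomes data for another round of learning.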

Toward Beneficial General Intelligence

If my perspective on AGI is correct, then once each of these four aspects is advanced beyond the current state, we’re going to be there—AGI at the human level and beyond.

I find this prospect tremendously exciting, and just a little scary. I am also aware that some observers, including big names like Stephen Hawking and Elon Musk, have expressed the reverse sentiment: more fear than excitement. I think nearly everyone who is serious about AGI development has put a lot of thought into the mitigation of the relevant risks.

One conclusion I have come to via my work on AI and robotics is: if we want our AGIs to absorb and understand human culture and values, the best approach will be to embed these AGIs in shared social and emotional contexts with people. I feel we are doing the right thing in our work with Sophia at Hanson Robotics; in recent experiments, we used Sophia as a meditation guide.

I have also been passionate in the last few years about working to ensure AI develops in a way that is egalitarian and participatory across the world economy, rather than in a manner driven mainly by the bottom lines of large corporations or the military needs of governments. Put simply: I would rather have a benevolent, loving AI become superintelligent than a killer military robot, an advertising engine, or an AI hedge fund. This has been part of my motivation in launching the SingularityNET project—to use the power of AI and blockchain together to provide an open marketplace in which anyone on the planet can provide or utilize the world’s most powerful AI, for any purpose. If an AGI emerges from a participatory “economy of minds” of this nature, it is more likely to have an ethical and inclusive mindset coming out of the gate.

We are venturing into unknown territory here, not only intellectually and technologically, but socially and philosophically as well. Let us do our best to carry out this next stage of our collective voyage in a manner that is wise and cooperative as well as clever and fascinating.

Originally Posted: From Here to Human-Level Artificial General Intelligence in Four (Not All That) Simple Steps

Bruce Chaplin

Facility Management Consulting | FM Services | Asset Management | FM Strategy | Workplace Services | FM Software

5y

You’ve sparked my interest, Ben. Thanks for sharing.

James Lewis

Researcher at representi.com

6y

4. To develop a machine that can learn how to learn, I think we need to first give it a reason to learn. If we write a program that does data analysis, then do further work to allow it to teach itself better data analysis, and then create some kind of loop such that it could teach itself to teach itself to do better data analysis, yes, that would be an advance (though a scary one). I would argue, however, that you have not created a thinking machine, a machine capable of its own decisions, a machine capable of adapting; you would simply have created a machine that is (possibly) better at data analysis.

James Lewis

Researcher at representi.com

6y

Thanks for the article. I'm new to the machine intelligence field and come at it from a different perspective. I'd like to discuss the four points: 1. Bringing the different technologies, or skills, together - yes, that would be useful. Speech recognition is very useful, and I'd like to see it integrated somehow with image classification (though at this point I lack the resources to figure out whether image classification is useful for my project). 2. This is where, for me, current machine intelligence thinking goes off the rails. Logic is too often accepted as synonymous with "thinking." Wittgenstein held that point of view early in his career ("Tractatus Logico-Philosophicus") and then rejected his earlier hypothesis in "Philosophical Investigations". I feel he left no stone unturned in his destruction of the Tractatus, and then we have to take into account the work of other twentieth-century philosophers to complete a picture of what "thinking" actually is. 3. I don't think "whole-organism architecture" is really necessary for building a thinking machine, though I do think a sensory, moving type of machine, like a probe or something of that nature, would be a very useful application of machine intelligence. Exceeded character limit :)

Alexander Borzenko

Chief Scientist at Proto-Mind Machines Inc.

6y

The maximum intelligence level of any system is the intelligence level of the best intellect in the system. Even a million morons can't beat the intellect of Einstein.


It does seem that embedding AGI within human networks helps open up greater possibilities for cross-pollination, offers a method to scale with complexity, and has the potential to shed light that otherwise might not reach individual labs or minds: community as laboratory. I wondered if anyone had made progress on, or attempted, a biomimetic approach to decision-making based on the chemical underpinnings of emotion, for example oxytocin, serotonin, and dopamine. If some aspects of synaptic processing are easier to develop in the dichotomy of zeroes and ones, I do wonder if suffusing a neural network with reward could help. No doubt a serendipitous question, but thanks for inspiring it. As Franz Och and later researchers leaped over the limitations of translation by comparing human-translated texts, it makes me wonder if a neural network community could develop creative models for rewards and biomimetic algorithms, and see if any of the current models, or even AutoML, might bring out something interesting on the other side, comparing stories developed by humans and algorithms with behavioral sequences that wander outside of logic.
