Artificial General Intelligence: A (Re-)Definition

Over the past seven decades, many "Artificial Intelligence" (AI) technologies have risen and fallen in popularity. Most of them propose a technique or methodology for enabling machines to perform well-defined tasks needed in business, healthcare, military, or other environments. What they don't do, however, is provide a viable pathway to human-level machine intelligence. But that's okay. Most of the problems businesses are trying to solve don't require that level of sophistication. Simple automation can cover the vast majority of those problems, and some form of "AI" (machine learning, deep learning, cognitive computing, etc.) can cover much of the remainder. Businesses are interested in solutions, not hypotheticals, and these technologies provide good solutions for most business use cases.

Our goal, however, is to build computer systems that can attain intelligence equivalent to that of humans. Can some configuration of current AI systems ever attain human-level intelligence? Are other technologies, techniques, or methodologies needed to create machines with human-level intelligence? Does such a system have economic value for businesses, which, after all, will fund further research and development? Are these even the right questions to ask?

These are important questions for the people I work with at Intelligent Artifacts, and the following is how we discuss them internally. Occasionally, a customer will raise some form of these questions to us, stirring a lively discussion that always takes us well past our scheduled meeting time! But they are always fun and insightful. I hope to share the clarity we have reached on this problem with them, and with others who share our passion for this field. I will choose some terms, perhaps re-define them, and ignore others entirely when I don't feel they provide good insight into the discussion. I make no apologies for this, since the goal is not to stick to a tradition formed by the accidents of history, but to provide an understanding with the clarity of hindsight.

The right question to find the right answer

In the field of AI, there is a sub-field called "Artificial General Intelligence" (AGI). Unfortunately, Stuart Russell and Peter Norvig's "Artificial Intelligence: A Modern Approach", arguably the world's most authoritative text on machine intelligence, doesn't provide a definition of AGI. According to Wikipedia, AGI "is the intelligence of a machine that could successfully perform any intellectual task that a human being can." This is the commonly held definition of AGI, and it is an unfortunate one. It is also often conflated with the term "Strong AI", which is a philosophical distinction about the authenticity (or genuineness) of the intelligence. This definition perpetuates the misconception of computer systems that can spontaneously achieve "consciousness" and take over the world like Skynet. There are many real-world ethical issues that people are trying to solve, but their efforts seem wasted to me because their underlying beliefs are flawed. I believe this flaw stems from the poor definition of what AGI actually is versus what it is not.

Bertrand Russell is quoted as saying, "The greatest challenge to any thinker is stating the problem in a way that will allow a solution." It is important to do that here. I would like to re-state the definition of AGI into a form that will allow us to build systems capable of human-level intelligence. Then, we can explore that definition further to understand how it would be possible to create those systems and avoid dead-ends.

First, I'd like to restructure where AGI fits in the world of AI. Rather than treating AGI as a sub-field of AI, I believe the appropriate positioning is that AI systems are special cases of AGI systems. This should be obvious from the "General" in AGI! To interpret the meaning of this, one needs to understand that a general solution to a set of problems inherently contains one or more special cases as subsets. A general theory should, therefore, be able to solve multiple special cases, but special cases cannot solve problems outside of their domains. For example, General Relativity (GR), which includes acceleration (and hence gravity) in its formulation, contains Special Relativity (SR) as the subset of cases where acceleration is excluded.

This new vantage point is useful in discerning what's possible and what's not as we search for technologies to build human-level machine intelligence. For example, knowing that AI solutions are a subset of AGI solutions, one might believe that building enough special-case AI solutions achieves AGI. But if one understands the scope of possible AI solutions, one realizes that it is a fool's errand to try to create human-level intelligence by building hundreds of special-case solutions. Some of the smartest people have been trying to do just that! IBM Watson has been a Herculean effort to develop many, many special AI use cases in the hope that they would form a complete set of solutions (or, at least, a complete-enough set for their customers' business needs). This is a reasonable approach if the scope is limited. But if the scope, or the claim, is to achieve human-level machine intelligence, as IBM has claimed, then this approach does not scale and is, therefore, not sustainable. In fact, it seems that even the limited scope of business use cases is still too large a space to cover with a scalable solution built from special AI.

A definition emerges

You may have noticed that I have been sneaking the term "special AI" into the discussion above. Indeed, my belief is that "intelligence" is inherently general! We associate adaptability with intelligence precisely because true biological intelligence is general: the same system can adapt to new problem domains. The "artificiality" of it is merely an indication that it is man-made rather than having emerged from nature. So, I would prefer to distinguish between "Special Artificial Intelligence" and "General Artificial Intelligence", rather than use the term "Artificial General Intelligence". That way, we could examine gradations of the degree to which an AI is general or specific in functionality, having assumed that the goal of intelligence is to be general! But now I'm just mincing words without adding much new value, so let's stick with the conventional term, "AGI". (Besides, I don't want to anger my marketing department any further!)

Now, let's define AGI as a machine intelligence system that can function in multiple domains. Such a system should be able to digest data from any domain (text, images, voice, etc.) and process that data to make predictions, make decisions, and/or take actions that put the system in a more favorable state.
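
To make that definition a little more concrete, here is a minimal sketch, in Python, of what such a domain-agnostic interface might look like. This is my own illustration, not the GAIuS API; the names (Percept, CognitiveAgent, observe, predict, act) are illustrative assumptions.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, List


@dataclass
class Percept:
    """A single observation from any domain: text, image features, sensor readings, etc."""
    modality: str   # e.g. "text", "image", "audio"
    payload: Any    # raw or pre-encoded data


class CognitiveAgent(ABC):
    """A domain-agnostic intelligence: the same interface regardless of the data source."""

    @abstractmethod
    def observe(self, percepts: List[Percept]) -> None:
        """Digest new data from any domain."""

    @abstractmethod
    def predict(self) -> List[Any]:
        """Return expectations about what comes next."""

    @abstractmethod
    def act(self) -> Any:
        """Choose an action intended to move the system toward a more favorable state."""
```

The point of the sketch is simply that nothing in the interface is specific to text, images, or any other domain; the domain lives entirely in the data, not in the system.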

We haven't yet brought up "human-level" machine intelligence. At this point, all we're interested in is building a system that is capable of working in multiple and disparate special cases. We don't care if the intelligence is at the level of a mouse or a monkey. We just want a framework that makes it possible for us to build AIs that can be used in different domains without much re-tooling, if any.

A repeatable building block of intelligence processes

Next, we want to modularize the above system in such a way that adding more of these "cognitive processors" (i.e., a universal intelligence process distilled into an accessible unit) provides functionality similar to having more brain matter. But we don't want this to be some amorphous grouping of brain matter! What we want is to replicate the connectome structures that allow different parts of the brain to further process data that has already been processed by units closer to the sensory edge. Now we have a repeatable functional unit, so we don't need to invent new solutions to scale vertically or horizontally.
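
As a rough illustration of the idea (again, a sketch of the concept rather than the actual GAIuS architecture), identical processing units can be wired into a directed graph so that downstream units further process what upstream units near the sensory edge have already produced:

```python
from typing import Any, Callable, Dict, List


class CognitiveProcessor:
    """One repeatable unit: receives inputs, transforms them, emits an output.

    The transform here is a placeholder; in a real system it would be the
    distilled intelligence process (learning, prediction, action selection).
    """

    def __init__(self, name: str, transform: Callable[[List[Any]], Any]):
        self.name = name
        self.transform = transform

    def process(self, inputs: List[Any]) -> Any:
        return self.transform(inputs)


def run_topology(processors: Dict[str, CognitiveProcessor],
                 edges: Dict[str, List[str]],
                 sensory_input: Dict[str, Any]) -> Dict[str, Any]:
    """Propagate data through a directed graph of identical units.

    `edges[name]` lists the upstream processors feeding `name`; units with no
    upstream edges read directly from the sensory input. Assumes the graph is
    acyclic and `processors` is listed in a valid processing order.
    """
    outputs: Dict[str, Any] = {}
    for name, proc in processors.items():
        upstream = edges.get(name, [])
        inputs = [outputs[u] for u in upstream] if upstream else [sensory_input[name]]
        outputs[name] = proc.process(inputs)
    return outputs


# Two edge units feed one downstream unit, mimicking data flowing away from the sensory edge.
procs = {
    "vision": CognitiveProcessor("vision", lambda xs: f"seen:{xs[0]}"),
    "audio": CognitiveProcessor("audio", lambda xs: f"heard:{xs[0]}"),
    "assoc": CognitiveProcessor("assoc", lambda xs: " + ".join(xs)),
}
edges = {"assoc": ["vision", "audio"]}
print(run_topology(procs, edges, {"vision": "cat", "audio": "meow"}))
```

Because every node is the same kind of unit, scaling up means adding more nodes and edges rather than designing new components.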

These units can be arranged in hierarchical or arbitrary network topologies. What's the right structure? We may gain some useful insights by analyzing the topology of the human connectome. That may be useful for, say, a robot, but most real-world business use cases will likely not benefit from that specific topology. Randomly selecting from an infinite space of topologies will probably not get us anywhere productive, either. So we now want to include a method for automatically finding the right topologies. One time-proven method is using genetic algorithms to evolve simple topologies into more complex forms within the environment of the use case. At this point, this becomes straightforward: by providing our solutions with "DNA" derived from the variable parameters and connections in their directed graphs, genetic algorithms can automatically find optimal or near-optimal solutions without manual engineering effort.
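
To show the shape of that idea, here is a hedged, minimal sketch of a genetic algorithm over such "DNA", assuming a genome that is simply the set of directed edges between units plus one tunable parameter per unit. The encoding, operators, and hyperparameters are my own illustrative choices, not the GAIuS implementation.

```python
import random
from typing import Callable, Tuple

# A genome encodes a topology as a set of directed edges between numbered units,
# plus one tunable parameter per unit.
Genome = Tuple[frozenset, Tuple[float, ...]]


def random_genome(n_units: int, rng: random.Random) -> Genome:
    edges = frozenset((i, j) for i in range(n_units) for j in range(n_units)
                      if i != j and rng.random() < 0.2)
    params = tuple(rng.random() for _ in range(n_units))
    return edges, params


def mutate(genome: Genome, n_units: int, rng: random.Random) -> Genome:
    edges, params = genome
    # Flip one randomly chosen edge and jitter one parameter.
    i, j = rng.randrange(n_units), rng.randrange(n_units)
    if i != j:
        edges = edges ^ {(i, j)}
    k = rng.randrange(n_units)
    params = tuple(p + rng.gauss(0, 0.1) if idx == k else p
                   for idx, p in enumerate(params))
    return edges, params


def crossover(a: Genome, b: Genome, rng: random.Random) -> Genome:
    # Keep edges shared by both parents, inherit the rest at random, average parameters.
    edges = frozenset(e for e in (a[0] | b[0])
                      if (e in a[0] and e in b[0]) or rng.random() < 0.5)
    params = tuple((x + y) / 2 for x, y in zip(a[1], b[1]))
    return edges, params


def evolve(fitness: Callable[[Genome], float], n_units: int = 6,
           pop_size: int = 20, generations: int = 50, seed: int = 0) -> Genome:
    rng = random.Random(seed)
    population = [random_genome(n_units, rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # simple truncation selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents), rng),
                           n_units, rng)
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```

A fitness function would assemble the agent described by a genome, run it against the use case, and return a score; the genetic algorithm then searches the topology space on its own.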

That's the solution nature chose to evolve simple jellyfish brains into more complex brains like those of rats, octopuses, and humans. By distilling the fundamental intelligence process inside the "cognitive processor", the search space shrinks enough that evolving these AGI solutions becomes achievable within reasonable time-frames.

Automate every aspect of the solution

Next, it's important to build all of this into an easy-to-use platform. The ease of use isn't just for the benefit of developers and engineers; it also makes it more attainable to automate every part of the system. Users should be able to breed good solutions together. Thus, there is no risk of flicking a switch one day and ending the world. The progress is, and always has been, incremental. Human-level machine intelligence will eventually emerge from such a system if, in evolving the solutions, we provide environmental pressures similar to those humans have faced in nature.

Evolve within the business environment

Lastly, it is important to work in an ecosystem that provides the opportunity for further enhancement of the AGI platform and for the evolution of solutions toward human-level intelligence. In the capitalist economic structure in which we live and work, the solution pathway requires building products that help businesses become more competitive or efficient. The alternative, academic approach often suffers from a vast disconnect between the pristine lab environment and the noisy, chaotic real world. Animal brains evolved to make sense of that chaos so that they could survive and reproduce. In my view, building solutions that work successfully in conventional business environments is a critical ingredient in the recipe for building human-level machine intelligence.

Businesses benefit from this synergy, too. In fact, it has become clear to us that special AI solutions cannot scale to meet the needs of businesses. Businesses require AGI to keep costs low, to discover cross-silo efficiencies, and to reach their markets faster.

What is outlined above is not simply theoretical work. At Intelligent Artifacts, we have built this platform and have run it in production systems for nearly a decade. Our progress has been methodical and deliberate. Like the lesser life-forms of pre-history, we've made sure that our platform's solutions are useful to our customers before evolving them further. We have not reached human-level machine intelligence yet, but we have a clear, viable pathway toward that end-goal. Our solution scales horizontally to cover the special AI cases and is industry- and sector-agnostic. It also scales vertically to improve performance in each use case.

As far as we know, no other organization has developed a system with the sophistication, performance, or potential of ours. Others may be out there, quietly building their own platforms as we have done for the last ten years. If you are, we at Intelligent Artifacts salute you. We know that our purpose isn't simply to create a useful product. It is to change the human experience. It is to put civilization on a new trajectory. Whether you get there first, or we do, the end result is the same: everyone benefits from this brave new future.

Disclaimer: I am the creator of the General Artificial Intelligence using Software (GAIuS) framework and the Genie platform. In 2009, I founded Intelligent Artifacts to continue building the GAIuS and Genie solution in real-world environments while consulting and licensing agents built from the system. After reaching the goals of those projects, we launched Genie for commercial use in 2016 to provide the platform to a larger customer base.
