Future of Artificial Intelligence
Vanshika Munshi
Senior Consultant-Client Relationship & Delivery Management at HuQuo
How can we develop an idea of what the future of artificial intelligence might look like?
When thinking about the future of artificial intelligence, I find it helpful to consider two different concepts in particular: human-level AI and transformative AI. The first concept highlights the AI’s capabilities and anchors them to a familiar benchmark, while transformative AI emphasizes the impact that this technology would have on the world.
From where we are today, much of this may sound like science fiction. It is therefore worth keeping in mind that the majority of surveyed AI experts believe there is a real chance that human-level artificial intelligence will be developed within the coming decades, and some believe that it will exist much sooner.
The advantages and disadvantages of comparing machine and human intelligence
One way to think about human-level artificial intelligence is to contrast it with the current state of AI technology. While today’s AI systems often have capabilities similar to a particular, limited part of the human mind, a human-level AI would be a machine that is capable of carrying out the same range of intellectual tasks that we humans are capable of. It is a machine that would be “able to learn to do anything that a human can do,” as Norvig and Russell put it in their textbook on AI.
Taken together, the range of abilities that characterize intelligence gives humans the ability to solve problems and achieve a wide variety of goals. A human-level AI would therefore be a system that could solve all the problems that we humans can solve, and do the tasks that humans do today. Such a machine, or collective of machines, would be able to do the work of a translator, an accountant, an illustrator, a teacher, a therapist, a truck driver, or a trader on the world’s financial markets. Like us, it would also be able to do research and science, and to develop new technologies based on that research.
The concept of human-level AI has some clear advantages. Using the familiarity of our own intelligence as a reference provides us with some clear guidance on how to imagine the capabilities of this technology.
However, it also has clear disadvantages. Anchoring our imagination of future AI systems to the familiar reality of human intelligence risks obscuring the very real differences between them.
Some of these differences are obvious. For example, AI systems will have the immense memory of computer systems, against which our own capacity to store information pales. Another obvious difference is the speed at which a machine can absorb and process information. But information storage and processing speed are not the only differences. The number of domains in which machines already outperform humans is steadily increasing: in chess, after matching the level of the best human players in the late 1990s, AI systems reached superhuman levels more than a decade ago. In other games, such as Go and complex strategy games, this has happened more recently.
These differences mean that an AI that is at least as good as humans in every domain would overall be much more powerful than the human mind. Even the first “human-level AI” would therefore be quite superhuman in many ways.
Human intelligence is also a poor metaphor for machine intelligence in other ways. The way we think is often very different from the way machines do, and as a consequence the output of thinking machines can seem very alien to us.
Most perplexing and most concerning are the strange and unexpected ways in which machine intelligence can fail. The AI-generated image of a horse below provides an example: on the one hand, AIs can do what no human can do, producing an image of almost anything, in any style (here photorealistic), in mere seconds; on the other hand, they can fail in ways that no human would. No human would make the mistake of drawing a horse with five legs.