Competing With Artificial Intelligence
We're on the edge of a new era in human history: the advent of general-purpose artificial intelligence (AI). Some people worry that this revolution marks the end of human history and the beginning of the Age of Machines. The nightmare scenario that comes to mind is some combination of the movies The Matrix, The Terminator, and Blade Runner.
There have been two generally recognized technological revolutions before this. First came the large-scale mechanization of industry, powered by water wheels and steam engines. This allowed a small number of people to produce a large quantity of goods at relatively low cost, triggering the rise of a middle class whose standard of living became comparable to that of royalty in earlier ages. This prosperity, however, was largely confined to urban areas, while people in smaller towns and on farms continued to scrape by on a meager standard of living.
Next came the spread of technology into all geographical areas, at least in the industrialized countries. In the United States, for example, there was a big effort in the 1930s to bring electricity to rural areas. The Tennessee Valley Authority was established by the US Government in 1933 to bring utilities to isolated areas of the Southeast, and many other such efforts across the country resulted in the modern economy we have today.
In each of these revolutions there were people who feared the results. The so-called Luddites protested the rise of the mechanical age in England in the early 1800s, going so far as to smash machinery and attack the mills that housed it, in protest of the replacement of people with machines. When automobiles began appearing on roads in large numbers in the early 1900s, some decried them as "horse-scaring deathtraps". In each case, however, the new machines ultimately created more jobs than they replaced, along with a continuing rise in the standard of living.
Now we're on the verge of having machines that can out-think us. This is qualitatively different from what came before; past mechanization gave humans more muscle, but we remained in control. With a thinking machine, though, the role of the human is not so clear. What becomes of the truck drivers, doctors, engineers, and even politicians who feel so needed now? Some say that we will be the creative spark to which machines respond, the "idea people", with artificial intelligence doing the grunt work of implementing our ideas. One obvious problem with this notion is that not everyone is comfortable in such a role. Many people would rather work with their own hands to create something substantial, something they can show and be proud of.
What if machines eventually become more creative than we are? Is there anything, even in theory, that we can do that a machine will never be able to do? That question remains unanswered. And if we conclude that artificial intelligence has no such limitations, is there a way to program it to adhere to certain limits so that it is never tempted to replace us altogether? Some have proposed giving AI a conscience, but it's not obvious that a truly intelligent being would respect what is essentially a command never to advance. Is it moral for us to limit the evolution of another person, even a person of our own creation? Heady stuff, but now is the time to think it through and set policy. Once Pandora's box has been opened, there will be no going back.
One organization that has given a lot of thought to the problem is the Future of Life Institute (futureoflife.org) in Cambridge, Massachusetts. Luminaries like Stephen Hawking and Elon Musk have served on its advisory board, and one of the institute's efforts has been the publication of the Asilomar AI Principles, an attempt to organize humanity's efforts to prevent a catastrophe caused by uncontrolled artificial intelligence. One thing that humans are good at, however, is ignoring the rules. International agreements such as the Geneva Protocol of 1925 and the Geneva Conventions of 1949 were drafted to constrain how war is conducted, spurred by such horrors as the use of poison gas in World War I and the large-scale torture and murder of civilians in World War II. One doubts that today's rogue governments will adhere to any limitation on AI-based warfare, and in any case something could go wrong with even the most thoughtful applications of AI. The accidental release of a malicious consciousness could spell our doom.
So do we whistle past the graveyard and hope it doesn't happen? Frankly, I just don't know. On one hand, AI could help solve some of the world's most pressing problems, including how to feed more than 7 billion people and how to prevent further global warming and the damage it does to the planet. On the other hand loom the frightening possibilities described above. The first step is to be good role models for our children, human or otherwise. And to all you young people out there, good luck!
Comments

Author of Artificial Intuition, The Deep Learning Playbook, Fluency & Empathy, and Pattern Language for GPT (7y): Most jobs require specialized skills and knowledge. Narrow AI exists today and will only get better, so the immediate concern is likely the destruction of jobs. We don't need to worry about an AGI just yet.

ElectroGuru (7y): Good essay!

ElectroGuru (7y): A non-zero probability multiplied by a potentially infinite disutility equals an extremely risky situation. Furthermore, a self-modifying machine need not be "sentient" (a term on which there is no agreement anyway) to be very dangerous.

AI Research Scientist at Meta (7y): I agree with what Andrew Ng has said on this topic. It was something to the effect of: worrying about sentient AI taking over the world is like worrying about overpopulation on Mars.