High-octane Machines
Cognitive technology can encompass anything from search engine algorithms to autonomous weapons to aircraft navigators.
Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
Why research AI safety?
In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.
In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a super-intelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes super-intelligent.
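To make Good’s feedback argument concrete, here is a minimal toy simulation of my own (not a model anyone uses for forecasting): once capability crosses an arbitrary “can do AI research” threshold, each generation’s improvement scales with the system’s current capability. Every number and name in it is an assumption chosen purely to show how that feedback loop produces runaway growth.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: below "human level" progress comes from human researchers at a
# slow, constant rate; at or above it, the system improves itself, so progress
# scales with its own capability. All numbers are arbitrary.

def simulate(generations=20, start=0.5, human_level=1.0,
             human_rate=0.05, self_rate=0.5):
    capability = start
    for gen in range(generations):
        if capability < human_level:
            capability += human_rate          # steady human-driven progress
        else:
            capability *= 1 + self_rate       # self-improvement: Good's feedback loop
        print(f"generation {gen:2d}: capability = {capability:8.2f} x human level")

simulate()
```

Run it and the output stays nearly flat for the first half and then climbs steeply; the point is only that a quantity which improves in proportion to itself behaves very differently from one improved at a fixed external rate.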
There are some who question whether strong AI will ever be achieved, and others who insist that the creation of super-intelligent AI is guaranteed to be beneficial. I believe research today will help us better prepare for and prevent potentially negative consequences in the future, so that we can enjoy the benefits of AI while avoiding its pitfalls.
How can AI be dangerous?
Most researchers agree that a super-intelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely: the AI is programmed to do something devastating, as with lethal autonomous weapons in the wrong hands, or the AI is programmed to do something beneficial but develops a destructive method for achieving its goal, which can happen whenever its goals are not fully aligned with ours.
As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
Why the recent interest in AI safety?
Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?
The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of super-intelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.
Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?
The top myths about advanced AI
A captivating conversation is taking place about the future of artificial intelligence and what it will/should mean for humanity. There are fascinating controversies where the world’s leading experts disagree, such as: AI’s future impact on the job market; if/when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear.
Timeline myths
The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.
One popular myth is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field.
On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933, less than 24 hours before Szilard’s invention of the nuclear chain reaction, that nuclear energy was “moonshine.” The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.
There have been a number of surveys asking AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more.
There’s also a related myth that people who worry about AI think it’s only a few years away. In fact, most people on record worrying about superhuman AI guess it’s still at least decades away. But they argue that as long as we’re not 100% sure that it won’t happen this century, it’s smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.
Controversy myths
Another common misconception is that the only people harboring concerns about AI and advocating AI safety research are luddites who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.
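The insurance analogy is really an expected-cost argument, and a quick back-of-the-envelope calculation makes it concrete. Every number below is invented purely for illustration, not drawn from any actual insurance or risk data.

```python
# Back-of-the-envelope expected-cost comparison (all numbers invented for illustration).
home_value = 300_000   # loss if the house burns down
p_fire = 0.004         # small but non-negligible annual probability of a fire
premium = 1_000        # annual cost of home insurance

expected_annual_loss_uninsured = p_fire * home_value   # 1,200 in expectation, with a huge worst case
expected_annual_cost_insured = premium                  # 1,000, and the downside is capped

print(f"uninsured expected annual loss: {expected_annual_loss_uninsured:,.0f}")
print(f"insured annual cost:            {expected_annual_cost_insured:,.0f}")
# The same logic applies to AI safety research: a modest, known investment now
# hedges against a low-probability but extremely costly outcome later.
```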
It may be that the media have made the AI trust debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do.
Myths about the risks of superhuman AI
Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.
If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what super-intelligent AI does, not how it subjectively feels.
The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A super-intelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.
The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”
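To show what a “goal in the narrow sense” looks like in practice, here is a minimal sketch of my own (a toy example, not real guidance software): a controller that at every step simply moves in the direction that reduces its distance to a target. It exhibits thoroughly goal-directed behavior with nothing resembling consciousness or a felt sense of purpose.

```python
import math

# Toy "heat-seeker": at each step, move toward whatever reduces the distance
# to the target. Goal-directed behavior, no awareness involved.

def step_toward(position, target, speed=1.0):
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target                      # goal reached this step
    return (position[0] + speed * dx / dist,
            position[1] + speed * dy / dist)

pos, target = (0.0, 0.0), (7.0, 5.0)
while pos != target:
    pos = step_toward(pos, target)
    print(f"position: ({pos[0]:.2f}, {pos[1]:.2f})")
```

Whether anything “feels like” being this program is beside the point; what matters to the thing being chased is that the program relentlessly closes the distance.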
The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter.
The interesting controversies
Having discussed the above-mentioned misconceptions, let’s now focus on true and interesting controversies where even the experts disagree. What sort of future do you want? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Will we control intelligent machines or will they control us? Will intelligent machines replace us, co-exist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please join the conversation!