In the Battle for Artificial Intelligence, Winner Takes All
by Stephen S. Hau, CEO of Newfire Partners
Artificial intelligence (AI) is one of the most exciting, rapidly advancing and possibly overhyped fields in technology today. It’s also one of the most frightening (more on that later).
Private investors are pouring tons of cash into AI ventures (over US$5B worldwide in 2017). Every year, tech giants hit new heights in making acquisitions, filing patents and committing resources in the battle for AI supremacy.
No doubt, AI has a lot of promise. Already today, nascent AIs can fly drones, beat the best human game players, translate languages, drive cars, trade stocks, develop new drug treatments, discover planets and much more.
Another driver of the intense activity is the size of the prize. In the battle for artificial intelligence supremacy, winner takes all. Put another way, the first team to invent a "strong AI" will quickly render all other competitors irrelevant. Some experts have theorized that the first strong AI will also be the last human invention, because of a strong AI's capacity for rapid self-improvement.
Weak vs. Strong AI
All of today's AI is so-called “weak AI,” which has narrow, predefined capabilities. Alexa and Siri are frequently cited examples of weak AI. While impressive in their own right and able to interact elegantly with humans, they are limited in their capabilities. There’s no possibility or expectation that Alexa or Siri, as currently constructed, will ever perform beyond their well-defined duties.
Weak AI that’s equipped with machine learning may make novel observations and may outperform humans at specific tasks. However, it’s still limited to the scope of its design and often constrained by the assumptions built into its original models.
In contrast, "strong AI" (a.k.a. artificial general intelligence) demonstrates a human-like ability to reason and grow, mimicking the human mind. Alan Turing proposed that such a machine would be able to hold a conversation with a human as convincingly as another human could. As sci-fi fans already know, this threshold is referred to as the Turing Test.
Based on advancements in software and hardware (e.g. quantum computing), many experts in the field believe that strong AI is achievable within 30 years. Some believe it could emerge even sooner.
Intelligence Explosion
It's generally theorized that once an AI reaches even modest human-level intelligence, it can become ultra-intelligent in a matter of days or hours, driven by recursive self-improvement. This prediction is known as the "intelligence explosion," and we’ve already observed an early example of it.
Shortly after Google's AlphaGo Master beat the world's best human player at the board game Go, it was decisively surpassed by its successor, AlphaGo Zero. The latter received no human training: it learned entirely by playing copies of itself, without using human-played games as an initial seed.
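The self-play idea is worth making concrete. Below is a minimal sketch in Python that trains a toy tic-tac-toe player entirely by playing copies of itself, with no human games as a seed. It is only an illustration of the concept: the game, the tabular value function and the simple update rule are stand-ins of my own choosing, and AlphaGo Zero's actual method (deep neural networks guided by Monte Carlo tree search) is vastly more sophisticated.

```python
# Toy illustration of learning purely from self-play (no human games).
# Tic-tac-toe and a tabular value function stand in for Go; this is NOT
# how AlphaGo Zero works internally, only the same high-level idea.
import random

def winner(board):
    """Return 'X', 'O', 'draw', or None if the game is still in progress."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

values = {}  # position -> estimated value for the player who just moved

def choose_move(board, player, explore=0.1):
    """Pick the move whose resulting position looks best, exploring sometimes."""
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if random.random() < explore:
        return random.choice(moves)
    def score(m):
        nxt = board[:m] + player + board[m+1:]
        return values.get(nxt, 0.0)
    return max(moves, key=score)

def self_play_game():
    """Play one game against a copy of itself, then nudge values toward the outcome."""
    board, player, history = ' ' * 9, 'X', []
    while winner(board) is None:
        move = choose_move(board, player)
        board = board[:move] + player + board[move+1:]
        history.append((board, player))
        player = 'O' if player == 'X' else 'X'
    result = winner(board)
    for state, mover in history:
        target = 0.0 if result == 'draw' else (1.0 if mover == result else -1.0)
        old = values.get(state, 0.0)
        values[state] = old + 0.1 * (target - old)  # simple incremental update

for _ in range(20000):  # the "virtual copies of itself" loop
    self_play_game()
print(f"Positions evaluated purely from self-play: {len(values)}")
```

Even in this toy setting, the player's entire notion of a "good" position comes from games it played against itself, which is the essence of what made AlphaGo Zero's approach so striking.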
The irony, of course, is that the student has become the teacher. Human players who once trained the AI are now desperately trying to learn from it. It remains to be seen whether the AI's learnings can be meaningfully used to improve human play. Players describe facing AlphaGo as playing against a distinctly non-human "personality" (if the word even applies), which may make knowledge transfer challenging. Consider that even after decades of studying computer chess games, no human chess player has been able to beat a computer designed to win.
The Sky’s the Limit, But …
At this point in the post, the “winner takes all” outcome should be clear.
Once an ultra-intelligent artificial general intelligence exists, able to self-improve and operate beyond human understanding, it may be directed to solve not just a single problem but all (solvable) problems. It can invent novel ways to improve itself.
As I. J. Good reasoned in 1965:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
With AI, the possibilities for improving society are limitless.
At the same time, there is clearly great risk of unintended consequences and bad actors. Since winner takes all, corporations, governments and others will all race to be first. As an industry and as a society, we need to design and implement safeguards with equal urgency.
It’s unlikely that meaningful protections will be as simple and elegant as Asimov’s Laws (also known as the Three Laws of Robotics), which have been widely popularized by Hollywood and widely criticized by experts as too limited.
I suspect we will discover that the only way to protect the human race from an ultra-intelligent AI is… you guessed it: an ultra-intelligent AI.
To be continued soon...