Elon Musk has a complex relationship with the AI community
Investing in AI
Musk’s relationship with AI goes back several years and he certainly has an eye for promising AI start-ups.
He was one of the first investors in Britain’s DeepMind, which is widely regarded as one of the world’s leading AI labs. Google acquired the company in January 2014 for around $600 million, earning Musk and other early investors, such as fellow PayPal co-founder Peter Thiel, a tidy return on their investments.
But his motives for investing in AI aren’t purely financial. In March 2014, just two months after DeepMind was acquired, Musk warned that AI is “potentially more dangerous than nukes,” suggesting that his investment might have been made because he was concerned about where the technology was headed.
The following year, he went on to help set up OpenAI, a new $1 billion AI research lab in San Francisco intended to rival DeepMind, with a particular focus on AI safety.
Musk has another company that’s looking to push the boundaries of AI. Founded in 2016, Neuralink wants to merge people’s brains and AI with the help of a Bluetooth-enabled processor that sits in the skull and talks to a person’s phone. Last July, the company said human trials would begin in 2020.
In many ways, Musk’s AI investments have allowed him to stay close to the field he’s so afraid of.
‘This is really the scariest problem to me’
As one of the most famous tech figures in the world, Musk can spread his alarmist views on AI to millions of people.
A number of other tech leaders, including Microsoft’s Bill Gates, believe superintelligent machines will exist one day, but they tend to be a bit more diplomatic when they air their thoughts to a public audience. Musk, on the other hand, doesn’t hold back.
In September 2017, Musk said on Twitter that AI could be the “most likely” cause of a third world war. His comment was in response to Russian President Vladimir Putin, who said that the first global leader in AI would “become the ruler of the world.”
Earlier that year, in July 2017, Musk warned that robots would become better than each and every human at everything and that this would lead to widespread job disruption.
“There certainly will be job disruption,” he said. “Because what’s going to happen is robots will be able to do everything better than us ... I mean all of us. Yeah, I am not sure exactly what to do about this. This is really the scariest problem to me, I will tell you.”
He added: “Transport will be one of the first to go fully autonomous. But when I say everything — the robots will be able to do everything, bar nothing.”
Musk didn’t stop there.
“I have exposure to the most cutting edge AI, and I think people should be really concerned by it,” he said. “AI is a fundamental risk to the existence of human civilization.”
The cutting-edge AI he refers to is likely being developed by scientists at OpenAI, and possibly some at Tesla too.
Rather awkwardly, OpenAI has tried to distance itself from Musk and his AI comments on numerous occasions. OpenAI employees don’t always like to see “Elon Musk’s OpenAI” in headlines, for example.
Feud with Zuckerberg
Some researchers at institutions like Cambridge University’s Centre for the Study of Existential Risk or Oxford’s Future of Humanity Institute might agree with at least some of Musk’s comments.
But his comments in July 2017 were the final straw for some people.
In a rare public disagreement with another tech leader, Facebook CEO Mark Zuckerberg accused Musk of fear-mongering and said his comments were “pretty irresponsible.”
Musk responded by saying that Zuckerberg didn’t understand the subject.
Doubling down
In March 2018, at the South by Southwest tech conference in Austin, Texas, Musk doubled down on his comments from 2014 and said that he thinks AI is far more dangerous than nuclear weapons, adding that there needs to be a regulatory body overseeing the development of superintelligence.
These relatively extreme views on AI are shared by only a small minority of AI researchers. But Musk’s celebrity status means his views are heard by huge audiences, and this frustrates people doing actual AI research.
Activity created by Comfy Languages Team
Reading comprehension
TRUE OR FALSE?
1) Elon Musk was interested both in promising AI start-ups and in those that showed little promise.
2) One of Elon Musk’s companies plans to put computer chips in people’s brains.
3) Even though Elon Musk is greatly concerned about AI and its possible effects on civilization, he suggests that there is little chance of it causing a third world war.
4) Elon Musk feared the negative impact of AI and robots on the job market.
5) Elon Musk is really concerned about powerful AI systems. However, he doesn’t think there is any urgency to regulate AI.
Answer Key:
1) FALSE. Musk’s relationship with AI goes back several years and he certainly has an eye for promising AI start-ups.
2) TRUE. Musk has another company that’s looking to push the boundaries of AI. Founded in 2016, Neuralink wants to merge people’s brains and AI with the help of a Bluetooth enabled processor that sits in the skull and talks to a person’s phone.
3) FALSE. In September 2017, Musk said on Twitter that AI could be the “most likely” cause of a third world war.
4) TRUE. “Because what’s going to happen is robots will be able to do everything better than us ... I mean all of us.” Musk also said: “I have exposure to the most cutting edge AI, and I think people should be really concerned by it. AI is a fundamental risk to the existence of human civilization.”
5) FALSE. In March 2018, at the South by Southwest tech conference in Austin, Texas, Musk doubled down on his comments from 2014 and said that he thinks AI is far more dangerous than nuclear weapons, adding that there needs to be a regulatory body overseeing the development of superintelligence.