Does Elon Musk Hate Artificial Intelligence

Elon Musk, the tech billionaire and CEO of Tesla, was quoted as saying Artificial Intelligence (AI) is the “greatest risk we face as a civilization.” He recently met with the National Governors Association and advocated for government involvement and regulation. This is a far cry from the government-should-leave-the-market-alone position high-tech firms normally take. At first glance, it seems awkward. The head of Tesla, who has aggressively invested in AI for self-driving cars, is worried about AI and wants bureaucratic regulation?

Is Musk driven by unwarranted fear or possibly taking this brash position as part of a marketing stunt? What is he actually saying? Well, I think he is being rational.  

Translating Technology Fear 

Mr. Musk is a brilliant technologist, engineer, and visionary (I am a fan of his work). I have never sat down and had a chat with him, but from what I understand, his concerns seem informed and grounded, as they would be for any technology that holds great power. AI will bring tremendous value and will extend computing beyond the analysis of data to the manipulation of the physical world. Autonomous transportation is a great example: AI will eventually enable vehicles to be in total control, which puts the lives and safety of passengers and pedestrians in the balance.

History teaches many lessons. Alfred Nobel's invention of dynamite was revolutionary in fueling the global industrial and economic revolutions. It was designed to accelerate the mining of resources and the building of infrastructure while improving safety during transport and use. Ultimately, to Nobel's displeasure, it also became the preferred compound for destruction and the taking of lives in wars across the globe.

More recently, advances in genetics emerged with the potential for medical breakthroughs and sweeping cures for afflictions that cause massive suffering. But again, such power could be misused and result in unintended consequences (destruction of our species, ravaged planetary ecosystems, etc.). Scientists and visionaries spoke up over a decade ago to support controls that throttled certain types of research. Such regulation and oversight have given the world time to understand the ramifications and to be more cautious as research moves forward.

Race to Destruction 

Business competition is fierce, and the race for innovation often casts aside safety. Government involvement can slow the process, allowing more attention to averting catastrophes and giving society time to debate the right level of ethical standards.

There was little need to argue for the regulations enacted to control the research and development of chemical, biological, and nuclear weapons. It was obvious. Nobody wants their neighbor brewing anthrax in a bathtub. But for cases where the risks are not apparent, and potentially obscured by the great benefits, it becomes more problematic. Marie Curie, the famed physicist and chemist, made great advances for modern medicine with little regulatory oversight, and ultimately died from her discoveries. Nowadays, we don’t want just anyone playing around with radioactive isotopes; there is government oversight. The same is true for much of the medical and pharmaceutical world, where research has boundaries to keep the population safe.

Artificial Intelligence, science fiction movies about self-aware computers destroying mankind aside, is a vague term. It encompasses so much, yet it remains difficult to describe exactly what it can and cannot do. This is where technology visionaries play a role, as some have the keen insight to see the risks. Elon Musk, Stephen Hawking, and Bill Gates have all publicly discussed their concerns about runaway AI.

“AI’s a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it’s too late.” – Elon Musk

Innovation and Caution 

I believe Musk wants to raise awareness and establish guardrails to make sure innovation does not recklessly run away to the detriment of safety, security, and privacy. He is not saying AI is inherently bad. It is just a tool, one which can be used benevolently or with malice, and which runs the risk of being mistakenly wielded in ways that create severe unintended consequences. His message to legislators, therefore, is that we must respect this power and move with more forethought as we improve our world.


Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit to hear insights on what is going on in cybersecurity.


Richard Logie

Getslocal Inc - Getslocal UK Ltd - Getslocal USA LLC - GETS Tech Ltd - GETS Plus LLC

7y

AI has always been about "Economics over Ethics!" AI only destroys the income of the poor and middle class and transfers wealth to the rich.

Ming LI

Experienced Machine Learning Scientist and Applied Researcher

7y

He simply raised concern for ethical development as AI is progressing so fast that reactive regulation will never be enough. Stop twisting headlines to seek attention.

Chuck Williams MBA, PMP, CISSP, GSLC, CSSLP, MCSE, CRISC, CDPSE

Veteran & intrapreneur w/ over quarter-century of leading large-scale 1st-in-enterprise efforts for Fortune 100, NFPs, Military

7y

Cautiously optimistic, or realist?

Cieran Joce ACIAT

Architectural Technologist | MSc in Project Management | Exploring the Intersection of Design & Technology

7y

No, he's just concerned about what damage could occur. AI is dangerous.

Ken Dickenson

Manager: Network Fraud Management System at Telkom SA Ltd (Retired)

7y

I think this is a well-balanced comment, and it helps me at least understand that there may be reasons for legislating AI, where previously I was somewhat surprised by Elon Musk's comment.
