What The AI Advocates Should Learn From Nuclear Tech

Nobel laureate Richard P. Feynman was just 27, nearly twenty years younger than Elon Musk is today, when he witnessed the first nuclear explosion, the Trinity test in the New Mexico desert, on July 16, 1945, as part of Oppenheimer's team of scientists working on the top-secret Manhattan Project.

On his way back home that day, he saw workers constructing a bridge, and he later wrote about what went through his mind in that moment: "I said to myself," writes Feynman, "Folks, don't be crazy. Just stop it. Stop building anything. There is no point, no purpose. Man has found something that will destroy everything that you are building."

We are all lucky to be alive today, at least so far, belying Feynman's prophecy. It has taken a lot of hard work and many near misses along the way over the last seventy years, and it could just as easily have gone the other way.

Elon Musk's 'scary warning' about Artificial Intelligence last week echoes a similar sentiment and even portends a similar future for AI. "I have access to some of the best, cutting-edge AI," he said a few days back. "And it's scary. AI is a fundamental existential risk to humanity."

In the 1940s, nuclear tech was the future answer to possibly every challenge the world faced, just as AI is being projected today. Nuclear tech could solve the energy crisis of the whole planet and even light up Mars if required; it could cure diseases (through controlled radiation); it could solve the food crisis of the entire globe through plant mutation, and whatnot. It promised much, much more than what AI is promising today. And it was not even a nebulous, woolly thing like deep learning. It was material: it produced heat and sound, and it killed cancer cells in front of our eyes.

But within weeks, Hiroshima happened and the world's perception of atomic energy took a bleak turn. It turned irreversibly into a Frankenstein's monster when the Soviet Union exploded its own bomb in 1949, setting the stage for a very scary Cold War era.

Since that day, the whole world has been fighting hard to contain the spread of nuclear technology, ironically much harder than it has worked on advancing the godsend it was meant to be. It went to such a ridiculous extent that even my college, in a sleepy corner of Chennai, could not import a four-CPU computer because 'it could be used for nuclear research (sic)'. Even as nuclear power plants were being shut down all around the globe, Fukushima scared the daylights out of us, and we began to secretly wish that nuclear tech had never been invented in the first place.

Will AI follow the same path? It would be sad if it does. One drastic attack on humanity by the bad guys (or, more probably, by the 'good' guys, as happened at Hiroshima) using AI could turn it from an angel into a monster. Is the news of the UAE hacking Qatar's cyber networks an early flag of this?

Or will the 'environmental effects', like job losses, loss of tax revenue to governments, and social upheavals, simply destroy the value of AI in the immediate term if not countered well? Will the whole of humanity again wish that AI had never been invented in the first place?

I personally wish AI succeeds where nuke-tech failed humanity. But we will be sitting ducks if we don't heed the warnings of Mr Musk.

Counterintuitively, I agree with Elon Musk that there is a significant role for governments and institutions to play in regulating AI. We cannot wait for a rogue actor, whether a state or a corporation, to drop the first big bomb. Data oligarchs like Facebook, Twitter, Google, Apple, UIDAI and Reliance Jio need to be very tightly regulated. After all, they just can't run away with our data and train machines to predict our next move without our consent. Governments should have majority representation on the boards of these companies and others that do leading-edge work on AI.

I know this immediately evokes a very bad feeling in us technologists and free-market advocates. Red tape is the last thing we want to see. But humanity is much larger than the nerds, and it had better be saved. Now. Because, to quote Maynard Keynes, "In the long run we are all dead."
