The Philosophy of Dissent in the Age of Artificial Intelligence
From left, front row: Norris Bradbury, John Manley, Enrico Fermi, and J.M.B. Kellogg; second row: Oppenheimer (wearing jacket and tie).


A few scientists have left Google in protest over the company's sale of artificial intelligence (AI) technologies to the US military. Their goal was to send a message to Google's management, and to the world, about the dangers of AI in warfare.

This reminded me of a story of protest in the history of nuclear weapons.

On July 16, 1945, at the Trinity test site in New Mexico, the Manhattan Project demonstrated its success. The scientists watching saw something awesome - in the true sense of the word - causing the wartime head of the Los Alamos Laboratory, J. Robert Oppenheimer, to quote from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds." The greatest science and engineering team ever assembled had created the world's first nuclear weapon. Within a month, two bombs were dropped on Hiroshima and Nagasaki.

Robert Oppenheimer visiting ground zero with Major General Leslie Groves and others at the Trinity site a few weeks after the first nuclear test.

Despite the end of World War II, the project wasn't over. There was a theory that a far more powerful bomb, a fusion bomb, could be created. After seeing the horrors of the devastated Japanese cities, some of the scientists revolted, resigned, and protested: they didn't think we should build a fusion weapon, even if it was possible.

In the end, the pro-fusion team, led by the controversial Edward Teller, won the battle. The United States marched forward and built the world's first fusion weapon. The first full-scale thermonuclear explosion, "Ivy Mike," took place on a tiny island of Enewetak Atoll in the Pacific in 1952, less than a decade after the Trinity test. That weapon's yield was about 10 megatons, over 450 times greater than either of the weapons dropped on Japan.

The Soviet Union quickly caught up, and a nuclear arms race ensued. In 1962, we came close to using the new bombs during the Cuban Missile Crisis. Thankfully, Kennedy and Khrushchev backed off. To this day, the only time a nuclear weapon has been used in war was at the end of World War II.

Technology has a habit of marching forward, regardless of who is leading the march. Had the US not pursued the Manhattan Project, some other country would have. Perhaps it would have taken an extra decade or two, but much of the science needed to create a nuclear weapon was done in the early 20th century. Once the formulas and components were tested, it was just a matter of putting the parts together with some clever and collaborative engineering. I'm not saying that it was easy - the Manhattan Project is the greatest science and engineering achievement of the 20th century. The point is, someone - likely one of our enemies - would have done it eventually.

Despite the seemingly inevitable progress of technology, the dissenters at both Los Alamos and Google serve a useful function. They cause us to re-examine our philosophy in light of new technology. The scientists who protested thermonuclear weapons succeeded in bringing the issue to the fore. They sparked a debate that eventually led to treaties restricting nuclear testing and helped to prevent nuclear war.

Fast-forward to today. A group of scientists resigned from Google in protest. I doubt their actions will have any impact on the ability of the US or any other government to incorporate AI technology into the military. The good news is, they’ve forced a very public debate on how AI should be used. 

Google collects billions, maybe trillions, of points of information every day about people around the globe. Should it be involved in military technology? Military research and technology often become valuable civilian tech. We like to think of military and civilian technology as separate realms, but they are deeply interlinked. After all, Google wouldn't even have been possible without the precursor to the Internet, ARPANET, a military-funded technology.

Luckily, we do sometimes limit what we do with technology. The USSR exploded the largest nuclear bomb in history, Tsar Bomba, which was originally planned for a yield of 100 megatons. The Soviet scientists on that project realized an explosion of that size was dangerous and unnecessary - and reduced the yield by half. It remains the largest explosion ever because we chose to stop there. The virtue of history and its lessons lies in reminding us where to draw lines - sooner and with more awareness. Just because we can build something doesn't mean we should.

Much as in the Cold War, advanced militaries in the AI-first era are already using AI in nefarious ways. We're in another kind of Cold War, one where technological advances will mean eventual military advantage. A good question at this point might be: what is the Tsar Bomba of AI? When do we stop pursuing further and say, "Nope, that's enough"?

Sarah Jane Hicks

Fortifying smart contract defenses in-house, prior to audit

5y

I wonder what'll be the next futuristic invention to cause such a philosophical dilemma - if the robots don't kill us first, that is.

Saumyabrata Das

Business Transformer

6y

It's like having a piece of iron and a choice of shaping it into a plough or a gun. The choice itself is the biggest human dilemma. And I believe there will always be a requirement for more ploughs (for food) than guns.

Vladimir Yakovlev, CISSP

Published Author | CISO | CTO | Cybersecurity and Infrastructure Solutions Architect

6y

Thank you for posting this. Dissent, unfortunately, is not a deterrent. We can object strenuously to militarization or to Big-Brotherization of AI to no avail; it will absolutely happen. Unlike the case of nuclear weapons, where the results were apparent and indisputable, utilization of AI may be so targeted and specific as to prevent attribution. Conventions of nonproliferation are great, but those did not prevent India, Pakistan, and NK from developing their own nuclear weapons. The pace of development in this area is insane and the imperatives to "get there" and to "get further" are too great, the potentials are too dazzling, and the fear of being second, never mind last, is tremendous. Given our inability to secure current environments and to prevent the occasional algorithm from doing something, well, unpredictable, biased, or unexplained, the outcome of applying true AI (anything whose decision process we cannot understand or predictably influence) to any aspect of our life is fraught with dangers that we cannot contemplate.
