To Properly Govern Artificial Intelligence, Look to Our Nuclear Failures | Opinion
SID MOHASSEB
Entrepreneur Philosopher: Founder | Investor | Innovator | University Professor | Author & Problem Finder
Artificial intelligence, with all its unpredictability, is being integrated into our lives without adequate caution or acknowledgment of potentially catastrophic outcomes. Most often, the world does not grasp the gravity of an innovation until after a disaster, and AI is no different. Nuclear technology should be a reference point for why global oversight is crucial. Without it, humanity cannot build guardrails strong enough to prevent cybercrime, manipulation of financial markets, or the development of lethal autonomous weapons.
As Russian President Vladimir Putin said in 2017, whoever leads in AI will rule the world, and a monopoly over this power should be avoided. International powers therefore urgently need to sign an AI Non-Proliferation Treaty to ensure all innovations and uses are safe, democratized, and incapable of destruction. It must be a treaty that strongly penalizes misuse and rewards positive innovation.
In the 1940s, the U.S. steadily developed the first atomic bombs for military use. Four years after those bombs caused mass destruction in Hiroshima and Nagasaki, the Soviet Union was ready to test its own. This unexpected spread of top-secret information later spurred the creation of the International Atomic Energy Agency (IAEA). Under that framework, the Non-Proliferation Treaty (NPT) was signed in 1968 to monitor the acquisition of nuclear weapons and promote peaceful use worldwide.
Had the world signed the NPT in 1938 rather than 1968, we might have prevented dangerous applications of nuclear technology, averted the existential threat of the Cold War, and reaped the benefits of safe nuclear energy sooner. That monumental failure killed many and tarnished the bright future of a new technology. AI is even more promising than nuclear technology, so we must act now to protect the world and ourselves from annihilation.
Perhaps the most striking difference between nuclear power and AI is accessibility. While the fissile materials used to build nuclear weapons are tightly controlled and largely inaccessible, AI is abundantly available through existing platforms and open-source software. Across the globe, competent computer programmers already have the main ingredients: intelligence and the ability to code.
Unlike uranium, artificial intelligence can never be fully contained. Any proposed Non-Proliferation Treaty must therefore include all the important parties: militaries, rogue individuals, foreign actors, and businesses alike. Constructing such regulation is complicated, but if global powers act quickly, they can minimize future destruction.
AI is still in its infancy, and whether it delivers significant advancement or catastrophic consequences largely depends on the framework established now, because preventative measures are far more effective than crisis management. Noah didn't build the Ark when it was raining. We are at the genesis of AI and must be aware of, and ready for, the storms ahead. We must not draft a treaty in a moment of crisis, when we would gloss over our Achilles' heel.
Since its conception in the 1950s, we have envisioned the damage that artificial intelligence could cause. As time has passed, however, we have grown weary of contemplating that future. We would rather ride the wave of excitement that comes with technological advancement than prioritize accountability and risk impeding innovation. I wholeheartedly agree that innovation should be encouraged and creativity should not be limited, but we must have a framework for identifying harmful AI software. We have the Food and Drug Administration, the Federal Trade Commission, and the Federal Communications Commission for a reason: to protect people from themselves and the things they create. So why is AI getting special treatment?
At various levels, the world is aware of the risks. The World Economic Forum has warned of non-state actors designing lethal autonomous weapons. By adapting commercial drones with open-source AI, attackers can create unmanned aerial vehicles (UAVs). UAVs equipped with face and gait recognition can target specific individuals, and deployed in large numbers, they could kill thousands. The Center for Artificial Intelligence and Digital Policy has lodged a complaint with the FTC to halt new OpenAI releases, and an open letter petitioning for a pause on AI experiments has been signed by Elon Musk and others. Google CEO Sundar Pichai is calling for AI regulations similar to nuclear treaties, and Dr. Geoffrey Hinton, the "Godfather of AI," just quit his position at Google because he regrets accelerating the uncontrollable abilities of these systems.
Some slow-burning efforts are also in the works. The United States government established the National Security Commission on Artificial Intelligence in 2018 to inform the president and Congress of the dangers of artificial intelligence. But these attempts at oversight cannot keep pace with the rate of technological advancement. Some countries are banning ChatGPT and developing new legislation, while others are unsure where to start. To protect humanity in every country, we should make a unified effort to join a Non-Proliferation Treaty.
AI is here to stay and present in every facet of our lives, from social media to news, from factories to medicine, and from banking to education, which makes its use difficult to control or oversee. The European Union's Artificial Intelligence Act models one approach to the problem of control and offers some initial ideas for an AI NPT. Imposing severe penalties for misuse is one; the Act should also be updated with crystal-clear definitions of what can be criminally prosecuted, to avoid becoming a free-for-all haven for lawsuits. It should likewise be broadened beyond its privacy-protection focus.
The Biden administration is currently developing accountability measures for AI and would benefit significantly from examining the strengths and weaknesses of the European Union's approach as well as China's proposed regulations.
AI is transforming every sector it touches, but without a Non-Proliferation Treaty it could bring an avalanche of disasters. We must establish a common foundation for something that could deliver unimaginable innovation and transformation. Getting Russia and North Korea to participate may be impossible today, but China and India, the two rising world powers, must be included. AI was created to drive efficiency and expand human capabilities, but unregulated invention will be a source of disaster and pull the world apart. With the help of a Non-Proliferation Treaty, we can make the right decisions about AI. We fumbled the handling of nuclear technology after the Manhattan Project, but we can still prevent AI from triggering another cold war.