AI’s existential risks
Susan L. Smoter
Solving Complex Problems with Innovative Technology and Helping Create a More Beautiful World
Policy experts say Congress should act before AI systems become even more advanced. Roll Call reports as much, quoting notables such as Senators Mitt Romney, R-Utah, Jack Reed, D-R.I., Jerry Moran, R-Kan., and Angus King, I-Maine.
These senators proposed in April a framework that would establish federal oversight of "frontier" AI models to guard against biological, chemical, cyber and nuclear threats. A document explaining their proposal calls for a federal agency or coordinating body that would enforce new safeguards, “which would apply to only the very largest and most advanced models. Such safeguards would be reevaluated on a recurring basis to anticipate evolving threat landscapes and technology,” the lawmakers said.
AI systems’ potential threats were highlighted by a group of scientists, tech industry executives and academics in a May 2023 open letter advising that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The signatories included top executives from OpenAI, Microsoft Corp., Google and Anthropic, among other notable companies.
Rep. Ted Lieu, D-Calif., who holds a computer science degree and was one of the signatories of that letter, said he remains concerned about the existential risks. He and Rep. Sara Jacobs, D-Calif., sought to address one aspect in the fiscal 2025 defense policy bill advanced by the House Armed Services Committee last month: a provision that would require a human to be in the loop on any decision involving the launch of a nuclear weapon, to prevent autonomous AI systems from causing World War III.
Lieu, co-chair of the bipartisan House Task Force on Artificial Intelligence, admitted in a recent interview that he and his colleagues are still trying to grasp the depth of these perils, such as AI spitting out instructions to build a better chemical or biological weapon.
I'm sharing this because the drumbeat of anxiety over the dangers of advanced technologies such as AI, ML, NLP and GenAI highlights how technology is developing faster than society can keep pace, and faster than we can adapt controls to oversee how potentially dangerous applications of these new capabilities will affect our security and well-being.
Experts studying technology and policy say that Congress and federal agencies should act before tech companies turn out AI systems with even more advanced capabilities.
“Policymakers should begin to put in place today a regulatory framework to prepare for this future,” when highly capable systems are widely available around the world, Paul Scharre, executive vice president at the Center for a New American Security, wrote in a recent report. “Building an anticipatory regulatory framework is essential because of the disconnect in speeds between AI progress and the policymaking process, the difficulty in predicting the capabilities of new AI systems for specific tasks, and the speed with which AI models proliferate today, absent regulation.
“Waiting to regulate frontier AI systems until concrete harms materialize will almost certainly result in regulation being too late,” said Scharre, a former Pentagon official who helped prepare the Defense Department’s policies on the use of autonomous weapons systems.
One reason the risks may be downplayed is that some in the tech industry say fears of existential risks from AI are overblown.
IBM, for example, has urged lawmakers to stay away from licensing and federal oversight for advanced AI systems.
Chris Padilla, IBM’s vice president for government and regulatory affairs, last week recounted for reporters the stance of Chief Privacy and Trust Officer Christina Montgomery, who told participants at a Schumer briefing that she didn’t think AI is an existential risk to humanity and that the U.S. doesn’t need a government licensing regime.
Instead of licensing and regulatory oversight of AI models, the government should hold developers and users of AI systems legally liable for harms they cause, Padilla said. “The main way that our CEO suggested this happen is through legal liability, basically, through the courts,” he said.
This is certainly a complex issue, and how to deal with the potential risks introduced by emerging technologies such as AI needs to be studied. It's clear to me that both the legislative path and the legal remedy will be much too slow to deal with the reality of any of these risks. Keeping a human in the loop is the most obvious requirement, and yet the age-old problem remains: how do we ensure that human is trustworthy? It seems easier to set guardrails on the technology itself.
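To make the human-in-the-loop idea concrete, here is a minimal, purely illustrative Python sketch. All names in it (RiskLevel, request_human_approval, execute_action) are hypothetical, not drawn from any real framework or from the defense bill's provision; it simply shows the shape of a system that hard-gates its highest-risk actions behind explicit human approval:

from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    HIGH = 2
    CRITICAL = 3  # irreversible, high-consequence actions

def request_human_approval(action: str) -> bool:
    """Block until a human operator explicitly approves or denies the action."""
    answer = input(f"APPROVAL REQUIRED for '{action}' -- type 'yes' to proceed: ")
    return answer.strip().lower() == "yes"

def execute_action(action: str, risk: RiskLevel) -> None:
    # The gate lives outside the model: CRITICAL actions are never autonomous.
    if risk is RiskLevel.CRITICAL and not request_human_approval(action):
        print(f"'{action}' denied by human operator; nothing executed.")
        return
    print(f"Executing: {action}")

execute_action("draft a status report", RiskLevel.LOW)       # runs unattended
execute_action("launch countermeasure", RiskLevel.CRITICAL)  # waits for a human

The design point is that the approval gate sits outside the AI system entirely, so no model output can bypass it. Of course, as noted above, this only relocates the trust problem to the human holding the key.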
I'd love to hear your thoughts!
Cloud & Security Architect | Writer | MCT | Founder | CTO
5 个月Susan L. Smoter Great article. I do agree, moving forward without following an ethical construct places AI in a unique place to do more harm than good.