The Leading Edge: Election Deepfakes and AI Weaponization Risks
National Journal
Solutions and tools to help government affairs professionals navigate the intersection of policy, politics, and people.
By Philip Athey, Editor
Nearly two years into the current generative AI revolution, the world has reached a point where real-world harms from the new technology are already abundant, yet the potential for even more catastrophic harm looms large.
Generative AI has already impacted the 2024 election, with everyone from candidates to foreign intelligence services using the new technology to spread misinformation.
In August, former President Donald Trump used a deepfake to falsely claim that mega-pop star Taylor Swift endorsed him in the 2024 presidential election. In September, when Swift actually endorsed Vice President Kamala Harris, she cited the deepfake as one of the reasons she threw her support behind the Democrat.
In July, Elon Musk, a Trump ally who has been promised a White House position if Trump wins, tweeted out an AI-manipulated video of Harris, which used an AI clone of the vice president’s voice to disparage her own candidacy.
On Tuesday, California Gov. Gavin Newsom signed an AI-election bill that will make sharing such a video illegal in the state.
The bill was part of a series of AI-related measures signed by Newsom, which outlaw election deepfakes, require labeling of AI-generated media, and give artists more protections over their digital likeness so that companies cannot use AI to recreate videos of them after their death.
But candidates and their allies have not been the only actors using AI to interfere in the upcoming presidential election.
In July, the Justice Department seized two domain names and nearly 1,000 social media accounts that the Russian government had employed to spread misinformation using AI.
These election problems arise as companies continue to push forward with even more advanced generative AI technologies that pose potentially greater threats.
According to OpenAI, the company’s latest model poses a “medium risk” for issues related to chemical, biological, radiological, and nuclear weapons. This is the highest risk rating any of OpenAI’s publicly released models has ever received. In other words, the company said there is a risk someone could use its model to learn how to build weapons of mass destruction in their basement.
OpenAI has downplayed the actual risk level and asserted that the model is necessary for its quest to develop a computer that can actually think like a human.
It is hard to accept the company’s assertions at face value after a string of current and former insiders have claimed it is prioritizing profits over safety. At least one such whistleblower sent a letter to the Securities and Exchange Commission in July, alleging that OpenAI’s nondisclosure agreements illegally prevented employees from informing the government of potential legal violations and safety concerns.
“Given the risk associated with the advancement of AI, there is an urgent need to ensure that employees working on this technology understand that they can raise complaints or address concerns to federal regulatory or law enforcement agencies,” the letter said, as reported by The Washington Post.
While some state bills have attempted to increase the safety of AI, the most impactful one so far, California’s S.B. 1047, sits on Newsom’s desk awaiting a signature or veto. At the federal level, despite dozens of hearings and several different proposals, not one piece of legislation is on a pathway to passage.
PolicyView: AI is a twice-monthly intelligence report from National Journal that provides a comprehensive view of AI legislation at the state and federal levels. We track what’s gaining momentum in specific areas of the country, what industries are most likely to be affected, and which lawmakers and influencers are driving the conversation.
To learn more and request the latest report, visit policyviewresearch.com.