In recent years, artificial intelligence (AI) has transformed numerous sectors, including political advertising. AI's ability to analyze vast amounts of data and create targeted, persuasive messages has revolutionized campaign strategies. However, this powerful tool has also raised concerns about transparency, fairness, and potential manipulation. In response, federal and state governments in the United States, as well as technology companies and international regulators, are rolling out new frameworks to address the use of AI in political advertising and combat misinformation.
Federal Regulatory Efforts
At the federal level, there is growing momentum to regulate AI in political advertising. Key legislative proposals and initiatives aimed at enhancing transparency and accountability include the following (an illustrative disclosure-record sketch follows this list):
- The Honest Ads Act: This bill, initially introduced in 2017, seeks to extend transparency requirements applicable to traditional media advertisements to digital platforms, including those using AI. Advertisers would need to disclose who is paying for online political ads and maintain a public database of such ads.
- Algorithmic Accountability Act: This proposed legislation aims to ensure greater oversight of AI systems, including those used in political advertising. It would require companies to conduct impact assessments to identify and mitigate potential biases and discriminatory effects in their AI systems.
- Federal Election Commission (FEC) Guidelines: The FEC is considering updates to its regulations to address AI's role in political advertising. Potential changes include stricter disclosure requirements and guidelines on the permissible use of AI in generating and disseminating political content.
- Federal Communications Commission (FCC) Proposed Rulemaking: On May 22, 2024, FCC Chairwoman Jessica Rosenworcel announced a draft notice of proposed rulemaking (NPRM) regarding the use of AI in political ads for TV and radio. The NPRM reportedly seeks public comment on whether broadcasters and programming entities should be required to inform consumers when AI tools are used to generate political ads. If adopted, the NPRM would initiate a period of public consultation, allowing stakeholders to provide input on how such disclosures should be implemented and enforced. Chairwoman Rosenworcel’s announcement was met with immediate skepticism. In a statement released on the same day, FCC Commissioner Brendan Carr said, “The FCC’s attempt to fundamentally alter the rules of the road for political speech just a short time before a national election is as misguided as it is unlawful.”
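To make the kind of disclosure these federal proposals contemplate more concrete, the sketch below models a hypothetical public ad-archive record of the sort the Honest Ads Act envisions, with an added flag for AI-generated content along the lines of what the FCC's draft NPRM would have broadcasters disclose. The schema, field names, and check are illustrative assumptions only; they are not drawn from any statute, bill text, or agency rule.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class PoliticalAdRecord:
    """Illustrative record for a public political-ad archive (hypothetical schema)."""
    ad_id: str                      # archive-assigned identifier
    sponsor: str                    # "paid for by" entity named in the ad
    payer: str                      # person or committee that purchased the placement
    amount_usd: float               # amount spent on the placement
    first_run: date                 # first date the ad aired or was served
    platforms: List[str] = field(default_factory=list)  # where the ad ran
    ai_generated: bool = False      # whether generative AI was used to create content
    ai_disclosure_text: str = ""    # consumer-facing disclosure, if any


def needs_ai_disclosure(record: PoliticalAdRecord) -> bool:
    """Flag records that claim AI-generated content but carry no disclosure text."""
    return record.ai_generated and not record.ai_disclosure_text.strip()


if __name__ == "__main__":
    ad = PoliticalAdRecord(
        ad_id="2024-000123",
        sponsor="Committee to Elect Jane Doe",       # hypothetical sponsor
        payer="Jane Doe for Senate",
        amount_usd=12_500.00,
        first_run=date(2024, 9, 1),
        platforms=["broadcast-tv", "streaming"],
        ai_generated=True,
    )
    if needs_ai_disclosure(ad):
        print("Missing AI-use disclosure for ad", ad.ad_id)
```

A real archive would, of course, track whatever fields a final statute or rule actually requires.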
State Legislative Efforts
Many states have moved swiftly to propose legislation addressing this emerging threat, recognizing the need to identify best practices for mitigating a problem that will likely escalate and evolve over time. As of March 2024, over 100 bills in 39 state legislatures contained provisions intended to regulate the potential for AI to produce election disinformation. For example (a minimal compliance-check sketch follows these examples):
- Wisconsin’s Mandated Disclaimers, $1,000 Fine: Wisconsin’s legislature acted quickly to pass A.B. 664, addressing the need to identify AI-generated material. Introduced in November 2023, it received final approval from both chambers and was signed by Governor Tony Evers on March 21, 2024. The law requires certain political campaign-affiliated entities to add a disclaimer noting the use of generative AI for any covered content they release, with non-compliance punishable by a $1,000 fine per violation. However, it does not address AI-generated content from non-campaign-affiliated entities.
- Florida’s Mandated Disclaimers, Criminal Misdemeanors: Governor Ron DeSantis signed H.B. 919 into law on April 26, 2024. Unlike Wisconsin’s bill, H.B. 919 underwent significant redrafting and amending. The bill mandates specific disclaimers for AI-generated content of a certain size and/or length, with failure to comply constituting a criminal misdemeanor punishable by up to one year of incarceration.
- Arizona’s Limited Mandated Disclaimers, Civil Cause of Action, Criminal Felonies, First Amendment Exceptions: Arizona is working on S.B. 1359 and H.B. 2394, both of which have passed their respective chambers of origin with bipartisan support. These bills specifically address digital impersonation of a candidate or elected official through synthetic media, with S.B. 1359’s disclaimer requirements applying during the 90 days preceding an election. Both bills provide more extensive remedial schemes than the Wisconsin and Florida legislation. S.B. 1359 imposes criminal penalties, raised to a felony for repeat offenses or those committed with intent to cause violence or harm. H.B. 2394 creates a civil cause of action allowing aggrieved parties to seek an injunction and monetary damages.
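As a rough illustration of the disclaimer mandates described above, the following sketch checks whether AI-generated campaign material carries a disclaimer and adds one if it is missing. The disclaimer wording, data model, and function names are hypothetical; the actual statutes (e.g., Wisconsin A.B. 664 and Florida H.B. 919) prescribe their own language, formatting, and coverage rules.

```python
from dataclasses import dataclass

# Placeholder wording for illustration only; real statutes prescribe their own
# disclaimer text and formatting requirements.
ILLUSTRATIVE_DISCLAIMER = (
    "This communication contains content generated by artificial intelligence."
)


@dataclass
class CampaignContent:
    """A piece of campaign-affiliated material (hypothetical model)."""
    body: str
    uses_generative_ai: bool
    disclaimer: str = ""


def is_compliant(item: CampaignContent) -> bool:
    """AI-generated material must carry a disclaimer; other material is unaffected."""
    return (not item.uses_generative_ai) or bool(item.disclaimer.strip())


def apply_disclaimer(item: CampaignContent) -> CampaignContent:
    """Add the illustrative disclaimer to AI-generated material that lacks one."""
    if not is_compliant(item):
        item.disclaimer = ILLUSTRATIVE_DISCLAIMER
    return item


if __name__ == "__main__":
    ad = CampaignContent(body="Vote on November 5!", uses_generative_ai=True)
    print("Compliant before:", is_compliant(ad))   # False: missing disclaimer
    apply_disclaimer(ad)
    print("Compliant after:", is_compliant(ad))    # True
```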
Social Media Platforms' and Technology Companies' Efforts to Control the Spread of Misinformation
Social media platforms and technology companies can play a crucial role in controlling the spread of misinformation, including AI-generated deepfakes. While some platforms have banned political advertisements altogether, others are specifically addressing deepfakes (a hypothetical disclosure-validation sketch follows this list):
- Tech Accord to Combat Deceptive Use of AI in Elections: Companies including Microsoft, Meta, Google, and Amazon have announced initiatives to prevent the spread of AI-generated synthetic media that could deceive voters. These policies aim to prevent the dissemination of videos, audio, and images that fake or alter the likeness of political candidates and election officials.
- Meta's Policy: Meta requires political advertisers to disclose when they use altered or digitally created media. This policy helps increase transparency and allows consumers to identify synthetic content.
- Google's Policy: Google has updated its political advertising policies to require disclosures when AI-generated content is used. Election advertisers must include a clear and prominent label on any ad containing synthetic content that depicts realistic-looking people or events.
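The platform policies above hinge on advertisers flagging altered or AI-generated media at submission time. The sketch below shows how a campaign's own ad-upload tooling might enforce such a flag before anything is sent to a platform. The payload fields and validation logic are assumptions for illustration; they do not reproduce Meta's or Google's actual advertising APIs.

```python
from typing import Any, Dict


class MissingAIDisclosureError(ValueError):
    """Raised when an AI-assisted ad is submitted without a disclosure flag."""


def validate_ad_payload(payload: Dict[str, Any]) -> Dict[str, Any]:
    """Reject ad payloads that used synthetic media but omit the disclosure.

    Field names ('used_generative_ai', 'synthetic_media_disclosure') are
    hypothetical and are not actual Meta or Google API parameters.
    """
    if payload.get("used_generative_ai") and not payload.get("synthetic_media_disclosure"):
        raise MissingAIDisclosureError(
            "Ad uses AI-generated media but has no synthetic-media disclosure set."
        )
    return payload


if __name__ == "__main__":
    draft_ad = {
        "headline": "Meet the candidate",
        "used_generative_ai": True,        # creative team used an AI image tool
        "synthetic_media_disclosure": "",  # disclosure not yet added
    }
    try:
        validate_ad_payload(draft_ad)
    except MissingAIDisclosureError as err:
        print("Blocked submission:", err)
```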
Global Regulation of Artificial Intelligence
The issue of AI in politics is a global concern. Different countries are taking varied approaches to regulate this emerging technology, including:
- China: China has implemented regulations governing deep synthesis and generative AI services, which require, among other things, that AI-generated or synthetically altered media be conspicuously labeled, including content used in political contexts.
- India and the European Union: The European Union's AI Act, adopted in 2024, includes transparency obligations for AI-generated content such as deepfakes, while India, like many other jurisdictions, is still grappling with how to regulate a technology that is advancing faster than the legislative process can keep pace.
Ongoing Challenges, Considerations, and the Path Forward
While these regulatory efforts represent significant steps forward, several challenges remain:
- Balancing Innovation and Regulation: Policymakers must strike a delicate balance between fostering innovation in AI technologies and ensuring ethical and transparent use in political contexts.
- Enforcement and Compliance: Ensuring compliance with new regulations can be difficult, especially given the rapid pace of technological advancements and the global nature of digital advertising platforms.
- Public Awareness: Educating the public about AI's role in political advertising is crucial. Voters need to understand how AI technologies can influence their perceptions and decisions.
The path forward involves continuous adaptation of regulatory frameworks to keep pace with the complexities of AI technology while safeguarding democratic processes.
The regulation of AI in political advertising is an evolving field, with federal and state governments actively working to build such frameworks. As these regulatory efforts take shape, they will play a crucial role in protecting the integrity of elections while enabling the beneficial uses of AI in political campaigning.
For campaigners, advertisers, and technologists, staying informed about these regulatory developments is essential. By understanding and adhering to new rules, they can contribute to a political advertising landscape that is transparent, fair, and ethical.
The information contained in this document is provided for informational purposes only and should not be construed as legal advice on any matter. The material may not reflect the most current legal developments, and the content and interpretation of the law addressed herein are subject to revision. The transmission and receipt of this document, in whole or in part, does not constitute or create a lawyer-client relationship between Vantage Legal PLLC and any recipient. Do not act or refrain from acting upon this information without seeking professional legal counsel. We disclaim all liability, to the fullest extent permitted by law, for actions taken or not taken based on any or all of the contents of this document. If you have questions about any of the information contained in this document, you should contact us so that we can review the facts associated with your unique situation.