Senate Rules Committee Advances Bills to Address Harmful AI in Elections

By: Tim Harper & Becca Branum

On May 15, 2024, the U.S. Senate Committee on Rules & Administration marked up and advanced three bills addressing the role of AI in elections: the Preparing Election Administrators for AI Act, the AI Transparency in Elections Act, and the Protect Elections from Deceptive AI Act. The first two of these bills would take limited but helpful steps to address some of the risks AI may pose to elections, though they would benefit from certain improvements. The third bill, while aimed at a genuine concern, raises more significant constitutional and implementation concerns.

Threat of Deepfakes to the 2024 Election

The trio of bills represents a congressional effort to regulate the influence of AI in elections, a technology that Senate Majority Leader Schumer said “has the potential to jaundice or even totally discredit our election systems.”

Senator Schumer’s remarks come amid a wave of calls decrying the risks that generative AI poses to the integrity of this year’s elections. The technology has the potential to supercharge attempts to interfere in elections and spread mis- and disinformation, as the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) have both recently warned.

We have already seen some examples of generative AI being deployed in political campaigns to create deepfakes: synthetic image, audio, or video content that falsely portrays an individual. The DeSantis campaign made realistic fake images of Trump hugging Dr. Fauci last year, and a deepfake robocall of President Biden discouraged voting in the New Hampshire primary this winter.

And the threat to the information environment isn’t just deepfakes. For example, bad actors could use the technology to scale their disinformation campaigns to flood the internet with false information in the waning days of the race, develop hyper-localized disinformation to convince voters that the polling station down the block has been temporarily closed on Election Day, or translate deceptive messages into non-English languages to target language minority groups in key swing states.

Generative AI poses additional challenges for election officials. AI capabilities could be used to generate convincing fake election records. Phishing campaigns can be made more personalized and persuasive. Generative AI can produce content for, and automate the submission of, FOIA requests that can overwhelm an elections office. And election officials’ voices could be cloned to send inauthentic communications about election results, or to direct staff to give bad actors access to their systems. All of this can be done more affordably, easily, and at a larger scale than before these tools were widely available.

But while generative AI could amplify these information and cybersecurity threats, most of these worst-case scenarios are still hypothetical — researchers haven’t uncovered many real-world examples of these malicious tactics so far, particularly in the United States. Thus, the extent to which the threats from AI will manifest remains unclear. At the same time, failing to take steps to mitigate the potential threats may leave us vulnerable in the final days before the November election, when it will largely be too late to act.

Preparing Election Administrators for AI Act

The first of the three bills, which has the widest bipartisan support, is the Preparing Election Administrators for AI Act. The bill would require the U.S. Election Assistance Commission (EAC) to develop, within 60 days of enactment, voluntary guidelines for state and local election administrators on the uses and risks of AI in elections. Specifically, the guidelines would address:

(1) the risks and benefits associated with using artificial intelligence technologies to conduct election administration activities;
(2) the cybersecurity risks of artificial intelligence technologies to election administration;
(3) how information generated and distributed by artificial intelligence technologies can affect the sharing of accurate election information and how election offices should respond; and
(4) how information generated and distributed by artificial intelligence technologies can affect the spreading of election disinformation that undermines public trust and confidence in elections.

After the election, the legislation would also require the EAC to release a report on the use of AI in the 2024 elections, including how AI-generated information was shared and how election offices used AI.

Overall, this legislation would be beneficial for state and local election administrators. Throughout this year, CDT has heard from many election officials who are interested in the opportunities of — and concerned about the cybersecurity and information integrity risks posed by — generative AI. Voluntary guidance on these issues may be helpful for many of these administrators, but the proximity of the election will temper its effectiveness. Election officials have an overwhelming number of obligations and procedures in the lead-up to the general election, and staff time and focus are in short supply. To truly begin to address these risks, additional appropriations for election security grants to states and localities will be needed.

Additionally, the decision to make the EAC the primary implementing agency is somewhat surprising. While providing voluntary guidelines to election officials is solidly within the EAC’s remit — see the Voluntary Voting System Guidelines (VVSG) — the agency is not known for its focus on cybersecurity and information integrity issues. That distinction belongs to CISA. We would recommend that the bill’s authors consider ways to have CISA, NIST, and the EAC coordinate in drafting these guidelines.

Finally, this legislation does not specify how the EAC or NIST should define artificial intelligence. Without a definition, the legislation may require the EAC to enumerate many of the more mundane uses of automation and algorithms that have long been standard in the elections space — such as using machines to detect filled-in bubbles on ballots. The legislation may benefit from narrowing its focus to generative AI and the risks it poses for elections.

AI Transparency in Elections Act

The Committee also marked up the AI Transparency in Elections Act, which would create labeling standards for political advertising that includes media generated by artificial intelligence. The bill would require a person financing public political advertising containing an image, audio, or video that was substantially generated by artificial intelligence to ensure that the advertisement includes a clear and conspicuous statement regarding its use of AI.

Only advertisements that expressly advocate for or against the nomination or election of a candidate, or that solicit a contribution for certain political groups (e.g., candidates and political committees), would be required to label their content at all times. Advertisements that merely refer to a candidate, including those that include the voice or likeness of a candidate (which would include ads containing a deepfake), would require a label during a window beginning 120 days prior to Election Day and concluding after Election Day. This enforcement period is designed as a limitation to support arguments that the requirement is narrowly tailored enough to be constitutional. Existing definitions for coordinated communications and electioneering communications contain similar enforcement periods, spanning from 30 days to several months and all ending on Election Day.

The government’s interest in requiring this content to be labeled, however, extends past Election Day, as elections aren’t final on the last day of voting. Recounts, election contests, and post-election processes like the certification of the vote are vulnerable to AI-generated disinformation and deepfakes, and candidates could run ads during these important periods to mobilize their supporters, as we saw in the lead-up to January 6, 2021. As currently written, ads about these issues that run after Election Day but before the election is finalized would only require a label if they contain express advocacy or solicit contributions (e.g., “support President Trump as he fights to stop the election from being rigged”).

No label would be required for uses of AI that are minor or cosmetic, such as cropping, resizing, or other immaterial uses, or other uses that do not create a fundamentally different understanding than a reasonable person would have from an unaltered version of the media. While this exception likely ensures that parody and satire are excluded from the labeling requirement, adding an explicit exception for this constitutionally protected speech would improve the legislation.

The legislation’s labeling requirements, similar to labels required in certain circumstances for other forms of political advertising, create a basic transparency mechanism to help the public understand when AI is used during elections. The legislation requires that the label be affixed by the person financing the advertising and that the label be permanently affixed to a covered communication; the latter raises questions about the feasibility of such permanent markings in the context of AI images and videos.
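The permanence concern is easy to see in practice. If a disclosure label travels as file metadata rather than as a mark rendered into the pixels or audio themselves, routine re-encoding by an ad platform or social network will strip it. The following is a minimal sketch, not drawn from the bill itself, using Python’s Pillow imaging library and a hypothetical “ai-disclosure” metadata tag:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated ad image (hypothetical example).
ad = Image.new("RGB", (640, 360), "white")

# Attach a disclosure label as PNG text metadata under a
# hypothetical "ai-disclosure" key.
meta = PngInfo()
meta.add_text("ai-disclosure", "This ad was substantially generated by AI.")
ad.save("ad_labeled.png", pnginfo=meta)

# The label is readable from the original file...
print(Image.open("ad_labeled.png").info.get("ai-disclosure"))
# -> This ad was substantially generated by AI.

# ...but an ordinary re-encode, as happens when a platform transcodes
# an upload, silently drops the metadata along with the label.
Image.open("ad_labeled.png").convert("RGB").save("ad_reshared.jpg", "JPEG")
print(Image.open("ad_reshared.jpg").info.get("ai-disclosure"))
# -> None: the label did not survive the re-encode.
```

Labels rendered directly into the pixels or audio waveform survive transcoding, but they can be cropped or edited out, which is why the feasibility of a truly “permanent” marking remains an open question.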

Protect Elections from Deceptive AI Act

The most expansive of the three pieces of legislation is the Protect Elections from Deceptive AI Act, which creates a new federal cause of action for federal candidates whose voice or likeness appears in, or who are the subject of, materially deceptive AI-generated audio or visual media. The bill seeks to achieve the laudable goal of protecting public discourse from some of the most harmful pieces of political disinformation, but it also sweeps too broadly, raising constitutional, policy, and implementation questions that must be addressed.

Amidst a trend of novel defamation litigation that seeks to address election-related and other misinformation in the public arena, this legislation would prohibit the knowing distribution of materially deceptive AI-generated audio or visual media of a federal candidate by any person, political committee, or other entity for the purpose of soliciting funds or influencing an election. The legislation, which is silent on what constitutes materiality, defines “deceptive AI-generated audio or visual media” to include an image, audio, or video that is the product of AI technology that uses machine learning (i) that merges, combines, replaces, or superimposes content that appears authentic or generates an inauthentic piece of media, and (ii) that would lead a reasonable person either to have a fundamentally different understanding of the media as compared to an unaltered version or to believe that the media accurately depicts appearance, speech, or expressive conduct not actually exhibited by the candidate.

To enforce this prohibition, the legislation creates a new federal cause of action for any federal candidate whose voice or likeness appears in, or who is the subject of, materially deceptive AI-generated content distributed in violation of the act. Actions brought under the act would be given scheduling precedence under the Federal Rules of Civil Procedure, and federal candidates would be entitled to injunctive or other equitable relief prohibiting further distribution, damages against those who distributed the content at issue, and attorney fees, upon clear and convincing evidence that a violation had occurred.

The bill would empower federal candidates to address some of the worst threats of deepfake misinformation that would almost certainly meet the definition of defamation and, therefore, be outside the realm of First Amendment-protected speech. But the bill would also empower the censorship of political speech, not just by candidates and campaigns but also by ordinary people. The Supreme Court has repeatedly and appropriately affirmed that political speech occupies the “highest rung on the hierarchy of First Amendment values,” recognizing that unfettered speech about public matters is a core democratic value. Moreover, falsehoods that fall short of defamation, perjury, or fraud are generally protected by the First Amendment, including those generated by artificial intelligence. The bill’s broad sweep into political speech even by ordinary citizens, its focus on public figures who must meet the higher “actual malice” standard to succeed on a defamation claim, and its application to the distribution (rather than creation) of relevant content without any showing of harm raise several constitutional and policy questions.

Consider, for example, a realistic, AI-generated video of former President Trump verbalizing a written post that he made to Truth Social criticizing President Biden and his policies. If the Biden campaign distributed that video as part of a fundraising plea, the bill would give rise to a cause of action requiring a clear and convincing demonstration that the content was materially deceptive and that a reasonable person viewing the content would believe it accurately exhibited the appearance or actions of a federal candidate (here, former President Trump). Thus, a suit by the former President against the Biden campaign would likely turn on the “materiality” of the deception inherent in the video, despite the video conveying words actually written by former President Trump that do not defame or otherwise harm him.

If a court found the video “materially” deceptive, the legislation could empower the censorship of non-defamatory, First Amendment-protected political speech. Even if a court did not ultimately interpret materiality so broadly, the ambiguity may incentivize some candidates to try to silence similar speech they dislike. The former President could also sue a private citizen who posted the video on social media knowing the video falsely depicted the former President, as well as the social media provider itself if it had the requisite knowledge and a court interpreted the legislation as overriding Section 230, even though such an interpretation would be against the weight of current precedent.

The bill also would empower federal candidates to sue to take down content of which they are merely the “subject,” even if their voice or likeness does not appear in it. If “subject of” were interpreted broadly to require only a mention of a candidate, the legislation seemingly could empower President Biden to seek injunctive relief prohibiting distribution of the same video, merely because he is the “subject of” a potentially deceptive piece of AI content, again without a showing of harm. Even without such an expansive interpretation, permitting lawsuits by candidates who are merely the “subject of” materially deceptive AI against anyone who might distribute it, including everyday social media users, could chill private people’s political speech.

In neither of these cases would President Biden or former President Trump have been defamed by any standard. The bill could nevertheless permit either candidate to silence First Amendment-protected speech that is AI-generated and deceptive but not necessarily harmful.

As currently drafted, the legislation’s ambiguity would encourage strategic litigation to silence participation in public debate, potentially targeting not just federal candidates but also critics, journalists, and regular citizens simply participating in public discourse. A well-financed federal candidate, irked by First Amendment-protected criticism, could use ambiguities regarding materiality, whether a candidate was truly the “subject of” a depiction, and the legislation’s unclear interaction with Section 230 to sue or threaten to sue a private citizen or platform. Even if the claim were without merit, the threat of a lawsuit would be enough to cause many people to self-silence and to cause litigation-wary platforms to censor the speech, threatening everyone’s ability to participate in public discourse. Modifications to the bill, such as requiring a showing of defamation before speech can be enjoined, would lessen incentives for strategic litigation and help ensure that the bill reaches only constitutionally proscribable content.
