The AI Vote: Artificial Intelligence Influences the Closest Presidential Race in Decades


As AI reshapes the political landscape, experts explore its impact on misinformation and election integrity.

By Clare Hill


Experts have called the 2024 campaign for president of the United States “the closest presidential election in decades, if not more than a century.” The race is likely to come down to the results of just a few battleground states. With Donald Trump and Kamala Harris promising very different versions of America’s future, it is essential that the election results reflect the true will of the American people. Does the new age of artificial intelligence put that at risk?

Both misinformation and disinformation pose threats to the representative nature of American democratic elections. Misinformation is simply inaccurate information, while disinformation is “false information purposely spread to influence public opinion or obscure the truth.” The spread of disinformation, in other words, involves a bad actor’s intent to hide the truth. Now, with the capabilities of generative AI, the danger of disinformation targeted at the American public has increased dramatically. A fabricated image that once took multiple people days to create can now be produced by a single actor almost immediately with AI image-generation tools. Skewing public opinion requires less organization and premeditation than ever before, making malicious individuals more powerful, and therefore more dangerous, than in the past.

Regardless of whether misinformation is actually present, the perception that AI is accelerating it is itself a danger to the democratic process. In a recent survey, 43 percent of American respondents believed that “artificial intelligence (AI) use will make it much or somewhat more difficult to find information about the election.” Democracy relies on an educated public, so Americans' distrust of political information puts democratic processes in danger.

"Democracy relies on an educated public, so Americans' distrust of political information puts democratic processes in danger."

This suspicion is not unfounded. American voters have good reason to believe they are seeing AI-generated falsehoods – the technology has already been used to misrepresent powerful people as the political cycle nears the November election. Candidates in both parties have been impersonated in various ways: Ron DeSantis’ campaign shared AI-generated photos of Donald Trump hugging Anthony Fauci, and a robocall used Joe Biden’s voice to tell voters to stay away from the polls. Pop superstar Taylor Swift even got involved, saying in her endorsement of Kamala Harris, “Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation.” Swift’s trepidation about AI-generated misinformation mirrors the hesitation of many Americans.

Experts in cybersecurity and social media expect AI to accelerate existing problems in the world of mis- and disinformation rather than create new ones. Cait Conley, a senior advisor to the director of the Cybersecurity and Infrastructure Security Agency, argued that “generative AI is not going to fundamentally introduce new threats to this election cycle.” Instead, according to Conley, the technology will intensify security problems seen in previous election cycles – problems that national security agencies have already planned to defend against.

Disinformation, one of these risks, poses a tangible threat to everyday Americans, who are now vulnerable to deception by online information sources. An official from the Office of the Director of National Intelligence declared, "The American public should know that content that they read online — especially on social media — could be foreign propaganda, even if it appears to be coming from fellow Americans or originating in the United States.” The United States has seen such manipulated information before, even without generative AI in play. Take, for example, Russia’s continued attempts to interfere in American politics. In 2016, actors paid by the Kremlin posted politically divisive ads on Facebook. These ads ranged from “Buff Bernie,” a “Bernie Sanders superhero promoting gay rights,” to Satan arm-wrestling Jesus on Hillary Clinton’s behalf, to an image of Donald Trump dressed as Santa declaring Americans’ right to say “Merry Christmas.” Despite being called out on its interference by U.S. officials, Russia continued to intervene in American politics. In the 2020 presidential election, individuals linked to the Russian state again created polarizing accounts and content on Facebook in a coordinated effort to sow political discord in America.

The current election is already facing documented mass-disinformation campaigns from foreign actors, including Russia, and widespread access to AI has made such disinformation easier than ever to create. As early as March 2024, “Russian state media and online accounts tied to the Kremlin have spread and amplified misleading and incendiary content about U.S. immigration and border security.” While America’s southern border may seem a random issue for Russian government actors to care about, the campaign likely serves the Kremlin’s interests: by shifting U.S. politics to be more sympathetic to Russia’s side in its conflict with Ukraine, it fuels calls to cut American military aid to Russia’s adversary.

Russia is not the only foreign actor already proven to be interfering in the U.S. election. A Chinese-run disinformation group, dubbed “Spamouflage” by intelligence experts, created fake accounts posing as Americans and used them to post inflammatory content on social media platforms. Many elements of these accounts, such as their profile pictures and the content itself, could easily be made or manipulated using AI. Meta, Facebook’s parent company, ultimately stepped in to remove such accounts because so many existed on its platform. Campaigns by Chinese actors have been on the U.S. government’s radar for some time – in 2023, “The Global Engagement Center, a State Department agency … warned that Beijing’s information campaign could eventually sway how decisions are made around the world and undermine U.S. interests.” Those anxieties are already starting to come to fruition in the current election cycle – when foreign actors influence American political discourse, the true interests of the American people are overshadowed.

"When foreign actors influence American political discourse, the true interests of the American people are overshadowed."

Disinformation isn’t just a danger for the United States. With elections happening around the world this year, political disinformation is proving an enduring challenge everywhere. At least 64 countries, plus the European Union – together home to around 49 percent of the world's people – will have held elections by the end of this year. These results stand to change both individual lives and the international system significantly, and AI has already demonstrated its potential to play a significant role. Take Bangladesh, for example. The South Asian country of 171 million people held an election in January of this year, and the incumbent prime minister, Sheikh Hasina, won an unsurprising victory – her “fourth straight term and fifth overall in power” – in a run that has been characterized by suppression of opposition voices. Rumeen Farhana, a prominent figure in the opposition Bangladesh Nationalist Party (BNP), had previously spoken out against Hasina’s suppression of free speech in Bangladesh. An unknown stakeholder in the election used generative AI to undermine Farhana’s credibility: Bangladesh is a conservative, Muslim-majority country, and a viral video that falsely claimed to show Farhana in a bikini created a serious political scandal. This is just one case of the worldwide phenomenon of AI-generated political disinformation.

Some analysts hold onto a ray of hope: even with concerted efforts to affect elections, social media platforms may not be as influential, or as easy to manipulate effectively, as some think. Simon, McBride, and Altay, writing in MIT Technology Review, suggest that even if malicious actors are attempting to sow discord around the U.S. presidential election, “these efforts have not been fruitful.” Mass persuasion is hard, and AI has not made it easy enough to work, according to this argument.

"Mass persuasion is hard, and AI has not made it easy enough to work."

While we cannot yet know the full extent of AI’s impact on the 2024 U.S. presidential election, we can think proactively about mitigating the risks. What can we do to combat the threat that generative AI poses to democratic elections? One option is formal governmental regulation. American federal regulators have taken a few modest steps to limit AI-generated misinformation and disinformation during this election: “The Federal Election Commission, the agency primarily responsible for ensuring the integrity of federal elections, issued an ‘interpretive rule’ effectively confirming that a decades-old law prohibiting ‘fraudulent misrepresentation of campaign authority’ — including falsely claiming to speak for or on behalf of a candidate in a way that is ‘damaging’ to them — applies whether or not a bad actor uses AI.” The Federal Communications Commission has also taken action, requiring broadcasters that use AI to disclose that fact to their audiences.

While further federal regulation through legislation would help mitigate the dangers of AI-generated misinformation, that process is predictably slow. AI-related election regulation has therefore fallen to the states, which have in fact taken action. Deepfakes – images and videos created using artificial intelligence that falsely represent their subjects – have been a particular target: according to the Brennan Center for Justice, “from January 1 to July 31, 2024, 14 states have enacted new laws or provisions to regulate the use of deepfakes in political communications.” Going forward, such state-level regulations on deepfakes and other uses of AI will likely be the way that we combat election misinformation.



***all imagery created using Image Creator from Designer***


The New AI Project | University of Notre Dame

Editor: Graham Wolfe

Advisor: John Behrens
