Elections, Information, and Trust in the Age of AI
Observations of a former political campaign strategist
The intersection of elections, information, and trust has always been a delicate dance of democracy, but in the age of artificial intelligence (AI), the rhythm has changed. AI has the power to revolutionize how we access and process the vast oceans of data that inform our political decisions. Yet, as much as it promises to enhance our understanding, it also poses novel challenges to the trust we place in the information that shapes our electoral choices.
We’re just ten months away from the 2024 US Presidential Election, the first major US election in this new era of AI and a litmus test for our democracy.
There will be relatively innocuous uses of the technology: first drafts of candidate speeches, online sentiment analysis, conversational robocalls, and AI-generated summaries to make sense of where candidates stand on key issues. Yet, we’re also in for a ‘tsunami of misinformation,’ in the words of Oren Etzioni, the founding CEO of the Allen Institute for Artificial Intelligence. This technology will be – and has already been – used to generate deepfakes designed to fabricate political scandals, create false echo chambers promoting dangerous narratives, and manipulate voter perceptions on a scale never seen before. And the content will come from everywhere – political candidates taking cheap shots at their opponents, foreign agents creating armies of fake accounts, and corporations and lobbyists flooding your social feeds to push a specific agenda or ballot measure.
Of course, mis/disinformation is nothing new. I’ve been in the ‘war room’ and led digital strategy, social media, and media buying for three successful legislative campaigns and a hotly contested gubernatorial race with razor-thin margins. Fighting false narratives has been an unfortunate part of each of these elections. Long before the Cambridge Analytica scandal and the more recent advances in generative AI, our political communication has been plagued with misinformation, smear attacks, and exploitative microtargeting. Yet I fear these ploys will be child’s play compared to what’s ahead of us. Misinformation has become industrial-scale, and the weaponization of AI has made it very, very difficult to separate truth from fiction.
It's not a matter of ‘if’ or even ‘when.’ The technology is here, and we’ve already seen how it can influence public opinion. It’s a matter of how we – as individuals and as a society – rise to these new challenges.
What we’re up against
Misinformation on steroids: AI-generated media
According to Europol, as much as 90% of online content may be synthetically generated by 2026. Thanks to generative AI and tools like ChatGPT, we’re already seeing an explosion of AI-produced text, images, video, and audio. The barriers to becoming a content producer have disappeared almost overnight, and the synthetic outputs are incredibly convincing – a recent Stanford University study found that the average individual can distinguish AI-generated content from human-created content only 50-52% of the time. We’d have the same success flipping a coin and calling heads or tails.
What does this mean for the pending election? Now, anyone, anywhere, can create convincing content in seconds. Pictures, videos, reports, entire websites. Foreign agents, political rivals, and lobbyists alike can flood social media with any narrative they like – with a fraction of the effort it would’ve taken a couple of years ago.
Deepfakes
A particularly troubling variation of this AI content is the deepfake – an image or video that distorts reality by manipulating existing media or fabricating the likeness of real individuals, often showing them doing something seemingly incriminating or otherwise out of character. Both President Joe Biden and former President Donald Trump have already been the victims of deepfakes designed to tarnish their reputations. And this is with the technology still in its infancy – the World Economic Forum reports a 900% annual increase in deepfake activity.
Sadly, we’re already running into questions of whether photos and videos can still be admitted as evidence in court, given this concerning technology. We’re past the days of ‘seeing is believing.’ Now, there’s a very real danger that the next major political scandal may simply be fabricated by an opponent.
Voice clones
Similar to a deepfake, a voice clone is a synthetic recreation of a person’s voice, which can be manipulated to say anything. All it takes is a three-second audio snippet for AI tools to clone a voice, including the speaker’s emotional tone, inflections, and acoustic environment.
We’ve already seen countless examples of this technology in action, including a seemingly damning audio clip of Slovakia’s Progressive party leader talking about rigging the election two days before the vote, the UK’s opposition leader berating a staffer, and a fabricated clip of Trump used in a television ad funded by a pro-DeSantis super PAC.
The blurring lines between AI and human
With generative AI, it’s become increasingly difficult to determine what content has been genuinely created by humans and what’s artificial. We’ve seen AI chatbots fooling people into divulging sensitive information and sending money, a dramatic rise in fake accounts (often with realistic profile photos and strong social engagement), and even large-scale bot farms pushing Russian propaganda across Ukraine.
The technology has become so convincing that it’s even fooling those who know us best, as seen in stories of a daughter’s voice clone leading a distressed mother to believe in a fake kidnapping or when parents wired $21,000 after believing they received a bail call from their son.
Even today’s identity verification technology appears to be caught off guard, with manipulated images passing Reddit verification tests and bank-grade security checks. At the same time, even the coveted "verified" status offered by many social platforms has begun to lose its meaning, with such seemingly trustworthy accounts caught circulating 74% of the propaganda and false information shared on Twitter/X during the Israel-Hamas conflict.
So, what can we do about it?
Fortunately, it’s not all doom and gloom. For all its downsides, AI has significant potential to change our lives for the better, and democracy has overcome worse threats. We just need to check what sort of power we’re giving AI and the agents of misinformation using it – and for that, we all have a role to play.
The role of individuals in combating misinformation
There are several steps citizens can take to become more informed voters:
1. Do your own research
First and foremost, take the time to learn about the candidates and issues. Browse their websites (after making sure you’re viewing the real website and not an impersonation), read up on their platforms, and know what they stand for – beyond red vs blue.
The media, social media, and AI-generated fake media will all be expressing their views on the candidates, but don’t take others’ words for truth without first taking the time to learn directly from the source.
2. Approach everything you read or watch online with a grain of salt
You’ve likely seen the infamous 1993 New Yorker cartoon turned Internet meme of a dog surfing the web with the caption ‘On the internet, nobody knows you’re a dog.’ Well, 31 years later, that message holds just as true. There’s a decent chance that a good chunk of your followers are bots and a near 100% chance that you’ve already consumed AI-generated content today, likely unknowingly. We need to take each of these online interactions with a healthy dose of skepticism.
Instead of ‘innocent until proven guilty,’ it’s ‘bot until proven human.’
3. Check your sources
Always verify the credibility of the information you come across. This means checking the authenticity of the news outlets, the background of the writers, and the quality of the sources cited. Pay attention to the language and tone used in the content; reputable sources typically avoid sensationalist or emotionally charged language, opting instead for a more measured and objective tone. Examine the overall balance of the report – credible journalism tends to present multiple viewpoints, especially on contentious issues. And be wary of content that heavily relies on anonymous sources or lacks clear evidence to back up its claims.
Fact-checking is easier said than done, as our social feeds are bombarded with one-sentence headlines and even fake accounts can be “verified” or have tens of thousands of followers. Still, to the extent possible, make sure content is coming from a reputable source, and read the entire article – not just the (often sensational) headline – before sharing.
4. Avoid (or at least acknowledge) echo chambers
Our social spheres are a lot more filtered than we might think. They say "birds of a feather flock together," and the same can be said about political ideologies. It can be easy to believe that our vote doesn’t matter or that a certain candidate is a shoo-in for office when we’re surrounded by people expressing the same views.
Be aware of the filter bubble that can occur when your friend group or news feed becomes a reflection of your own beliefs, not the reality. Make a conscious effort to consider other perspectives and challenge your preconceptions.
It’s equally possible that the echo chamber we find ourselves in has been fabricated by advertisers or other outside forces. If you only see ads for a particular candidate, it’s not that they’re running unopposed – it’s that you’re being microtargeted with highly personalized messages designed to influence how you vote (see tip #6).
5. Seek out a variety of news sources
Echo chambers amplify our existing views by continuously presenting content that aligns with our beliefs, inadvertently reinforcing familiarity bias — the tendency to favor information that conforms to what we already know. To counter this, it's vital to actively seek out diverse sources and viewpoints, particularly those that challenge our preconceptions.
This can involve subscribing to news outlets with varying editorial stances, engaging in conversations with individuals from different backgrounds, or participating in forums that foster diverse discussions. By doing so, we not only break free from the confines of our echo chambers but also enhance our critical thinking skills and cultivate a more nuanced and informed perspective on the world around us.
If you’re looking for a trusted nonpartisan outlet, publications like Reuters and Newsweek typically rank pretty neutral. Or, as I like to do, consider toggling between different outlets with known biases – perhaps comparing MSNBC and Fox News to better understand the arguments on either side.
6. Take control of your data
Lastly, be mindful of the digital footprint you leave online. Data is a political strategist’s best friend, and the more data you leave behind, the more leverage you give advertisers to social engineer how you vote.
This data-driven microtargeting was the exact tactic used by firms like Cambridge Analytica to profile and manipulate swing voters in the 2016 US Presidential Election – and it’s even more dangerous this time around, with advertisers and even foreign agents able to whip up highly personalized content at scale.
Use privacy-focused browsers, adjust your social media settings to limit data collection, and be cautious about sharing personal information. Many platforms, including Facebook and Twitter/X, even give you the option to opt out of interest-based ads. Your data can be used to target you with tailored misinformation, so protecting it is a form of self-defense against manipulation.
The role of the public sector in regulating AI
Government regulation can play a crucial role in maintaining the integrity of information and protecting public trust.
In the United States, we’ve seen a steady stream of state legislation designed to combat deepfakes, particularly in the context of elections. Several states, including Michigan, Minnesota, and Washington, require a disclosure to be placed on any AI-generated media designed to influence an election. Others, like Texas and California, have criminalized political deepfakes. Whether these laws can be enforced – or whether synthetic content can even be reliably detected – remains to be seen, but it’s a step in the right direction.
On the federal level, there’s been a bipartisan move by senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC) to introduce the new NO FAKES Act, designed to prevent the “production of a digital replica without consent of the applicable individual or rights holder.” Two months later, House lawmakers followed suit and proposed the No AI Fraud Act, offering similar protections. Both proposals are currently in discussion.
There have even been calls for governments worldwide to issue a moratorium on AI, halting further development in its tracks. Elon Musk, Steve Wozniak, Andrew Yang, and over 1,000 tech leaders and researchers signed an open letter calling for just that, warning about the “profound risks to society and humanity” that AI presents. Such drastic action seems unlikely – especially considering the good that AI can do, particularly in healthcare, research, and education – so a middle ground is needed, and fast. This is why President Biden issued an executive order back in October to place guardrails on the development of AI and promote responsible innovation.
Meanwhile, the European Union recently agreed on the AI Act, landmark legislation designed to limit the use of AI and require manipulated images and videos to be clearly labeled as AI-generated. Non-compliance can be penalized with a fine of up to EUR 35 million; however, the law won’t be fully enforced until 2026.
Beyond legislative measures, governments can foster better public awareness and education. Initiatives can include public campaigns that inform voters about the nature and risks of deepfakes and misinformation, equipping citizens with the skills to critically assess the content they encounter. By investing in education, governments not only combat the current challenges posed by AI but also lay the groundwork for a more informed and resilient society that can navigate the complexities of a digital world where AI plays an increasingly prominent role.
The role of tech and the private sector in building defenses against AI
As creators and custodians of AI technologies, tech companies have both the power and the duty to ensure these systems are developed and used in ways that do not harm society or individuals.
With AI now making its way into every tech company’s product and pitch deck, it’s up to these companies to self-regulate and build in constraints around what can and can’t be done with the technology. OpenAI’s DALL-E image generator, for example, has internal guardrails that limit the technology’s ability to generate violent, hateful, or adult content, as well as its ability to generate images of politicians or other public figures. Just yesterday, OpenAI also announced new restrictions on the use of both ChatGPT and DALL-E for political campaigns. The company also announced plans to implement digital credentials developed by the Coalition for Content Provenance and Authenticity (C2PA) to cryptographically encode details about an image’s provenance – a sort of AI watermark. (A major step in the right direction, in my view, and one that all generative AI companies should be required to follow.)
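To make the provenance idea more concrete, here is a minimal sketch of the hash-and-sign principle that standards like C2PA build on. To be clear, this is not the C2PA specification itself (which defines its own signed manifest format and certificate chains); the function names, fields, and workflow below are illustrative assumptions only.

```python
# A minimal, conceptual sketch of the hash-and-sign idea behind content
# provenance. It is NOT the real C2PA format; the fields and helper names
# here are illustrative assumptions.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def create_manifest(image_bytes: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    """Bind creator metadata to the exact image contents and sign it."""
    claim = {
        "creator": creator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "tool": "example-generator",  # hypothetical field for the capture/generation tool
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_manifest(image_bytes: bytes, manifest: dict, pub: Ed25519PublicKey) -> bool:
    """Return True only if the image is unmodified and the signature checks out."""
    claim = manifest["claim"]
    if hashlib.sha256(image_bytes).hexdigest() != claim["sha256"]:
        return False  # the pixels no longer match what was signed
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    image = b"...raw image bytes..."
    manifest = create_manifest(image, "News Photo Desk", key)
    print(verify_manifest(image, manifest, key.public_key()))         # True
    print(verify_manifest(image + b"!", manifest, key.public_key()))  # False: tampered
```

The real standard goes further – as I understand it, the signed claim is embedded directly in the media file and carried across edits – but the basic promise is the same: anyone downstream, including platforms and voters, can check whether a picture still matches what its creator originally signed.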
OpenAI and other companies are also putting in place ethical guidelines, internal review boards, and collaborating toward new industry standards for responsible AI – all of which can help ensure that AI is a force for good.
Yet it’s not just about how the content is generated; it’s also about how it’s shared and propagated. Online communities and social media companies must take accountability for protecting the trust and safety of their platforms. Both Google and Meta (the parent of Facebook and Instagram) now require disclosures on ads that contain “synthetic content that inauthentically depicts real or realistic-looking people or events.” Both companies also have an outright ban on true deepfakes and have heavily invested in the Sisyphean task of fighting misinformation by employing third-party fact-checkers, limiting fake accounts, and applying machine learning to detect fraud and policy violations. They also both mandate the verification of political advertisers, a process that requires government-issued ID, physical mailing address, and either an Employer Identification Number or Federal Election Commission registration number. TikTok is also stepping up its trust and safety game, fueled by the threat of heavy fines from the European Union following the platform's surge in faked videos and hate speech during the Israel-Hamas war. On paper, TikTok bans all political ads, although it seems they have an enforcement problem. A notable absence from this list is Twitter/X, which laid off most of its content moderation and election integrity teams and has since emerged as the biggest purveyor of disinformation under Elon Musk’s ownership.
We’ve also seen companies approaching a new AI-heavy future from a Web3 digital trust perspective, exploring how technologies like privacy-preserving verifiable credentials and proof of personhood protocols can empower individuals to prove what’s real and what’s AI. Companies like Gen (my employer) are even “fighting AI with AI” with innovations like Genie, designed to help individuals detect phishing attacks and other common scams, which have increased by 1200% since the release of ChatGPT.
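As a rough illustration of what “fighting AI with AI” can look like in practice, here is a minimal sketch of a text classifier that scores messages for phishing risk. This is not how Genie or any other commercial product actually works; the training examples and model choice below are purely illustrative assumptions.

```python
# A toy illustration of "fighting AI with AI": train a small classifier on
# labeled messages and use it to score new ones for phishing risk. This is
# NOT how any commercial scam detector works; the examples are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data; a real system would learn from millions of messages.
messages = [
    "Your account has been locked, verify your password here immediately",
    "Congratulations, you won a prize, click this link to claim it",
    "Lunch at noon tomorrow?",
    "Here are the meeting notes from today's call",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

suspect = "Urgent: confirm your bank details at this link to avoid suspension"
score = model.predict_proba([suspect])[0][1]
print(f"Estimated phishing probability: {score:.2f}")
```

The point is less the specific model than the pattern: the same machine-learning tools that power synthetic content can be turned around to flag malicious or manipulative content at scale.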
Of course, innovation will always follow the money, which is why it’s crucial for investors to commit to responsible AI. To date, over 35 venture capital firms have signed voluntary AI commitments to promote the ethical use of AI throughout their investment portfolio.
Are we headed toward the first ‘deepfake election’?
The dangers of AI are very real. Never before has the average person been able to, so easily and so quickly, produce thousands of pages of original and convincing content, fabricate a political scandal, and dupe unsuspecting citizens. There’s no doubt in my mind that we’re in for an incredibly messy election, even casting aside the usual political tensions and the increasing polarization.
Yet, while AI should certainly be ringing some alarms for election officials, I still have hope for a free and fair election. But it will take all of us working together to ensure that happens – citizens maintaining a watchful eye for AI threats and misinformation echo chambers, governments putting the right guardrails in place for responsible AI, and the private sector committing to self-regulation.
Stay vigilant, my friends, and get out and vote.
Thanks to Drummond Reed and Rayson Andrade-Walz for reviewing this piece.