Why Deepfakes Are Not the Most Insidious Way AI Impacts Democracies
Edson Porto
Most discussions about the impact of generative AI on elections this year have centred on how the technology allows political groups to cheaply create deepfakes or mass disinformation campaigns. It's a valid concern, given the global proliferation and influence of such campaigns.
In the United States, an AI-generated robocall impersonating President Joe Biden urged New Hampshire voters to skip the state's primary. In Slovakia, AI-generated audio recordings falsely claimed a candidate planned to rig the election, and in Nigeria, manipulated audio clips implicated a presidential candidate in ballot tampering.
The affordability and efficiency of AI-driven misinformation campaigns enable them to reach large audiences swiftly, particularly on social media, where sensational and emotionally charged content spreads rapidly, often before it can be fact-checked.
Fake content is a significant problem in how political actors use technology, but it is not the most critical one. The deeper trouble lies in politicians deliberately muddying the debate and seeking to benefit from the confusion.
There is a growing trend of using generative AI as a scapegoat. The phenomenon of liars exploiting the perception that deepfakes are everywhere in order to dodge accountability has even been dubbed the "liar's dividend" by the American law professors Bobby Chesney and Danielle Citron.
In this year's elections, an Indian politician claimed that genuine audio of him criticising party members was AI-generated. In Turkey, a candidate claimed a compromising video was a deepfake, although it was authentic.
This tactic casts doubt on all information, undermining public trust in genuine evidence and the very possibility of a shared truth.
Modern authoritarian governments were using similar tactics long before generative AI. They discredit any notion of reliable truth or independent sources in order to demobilise and demoralise their citizens. The resulting uncertainty and general lack of trust immobilise the population even in critical situations. Putin demonstrated the method yet again when he refused even to call the war in Ukraine a war.
These governments still employ more traditional tactics, such as eliminating the opposition or controlling the media, but such heavy-handed approaches become increasingly unnecessary in societies overwhelmed by misinformation and doubt.
Even without an authoritarian government orchestrating the process, democracies now face a similar challenge.
Trust in democratic institutions has declined globally. A recent Edelman survey shows that 59% of Australians think that political leaders "are purposely trying to mislead people by saying things they know are false or gross exaggerations".
This amplifies the perception that democratic systems are broken and increases the appeal of politicians who refuse to play by the rules.
One evident consequence of this erosion of shared reality is the rise of political campaigns built on pure propaganda and emotional messaging. The 2024 US election is a case in point: thus far it has been dominated by sensationalism, personal attacks, and tribalism rather than by discussion of problems, policies, and solutions. This shift fosters division and fear, impoverishes political debate, and undermines democratic institutions.
There is no panacea to address these challenges. Educating the public about AI, deepfakes, and disinformation is crucial. By improving media and information literacy, citizens can become more discerning consumers of information and better equipped to identify and reject false content.
Investing in advanced technologies to detect and debunk deepfakes and other AI-generated misinformation can also help curb its spread.
It is also important to implement and enforce regulations that require transparency and accountability from technology companies. Policies can mandate clear labelling of AI-generated content and hold creators of malicious disinformation campaigns accountable.
If democracies don't find ways to address the crisis of AI-generated deception and the consequent plunge in trust in democratic systems, at best we will see an impoverishment of political debate and policy. At worst, the democratic endeavour itself will be under threat.