Navigating the Threat of Deepfakes
Alliance of Democracies Foundation
We are a non-profit organization dedicated to the advancement of democracy and free markets across the globe.
In 2018, we convened the Transatlantic Commission on Election Integrity. It came out of an op-ed written by Joe Biden in which he called for a 9/11-style commission to look into Russian election interference in the 2016 US election.
The premise of our commission was that we would not re-litigate 2016 but look to the future, and encourage more cooperation among allies on this common threat.
At the first meeting of the TCEI we played the members (including Joe Biden, Nick Clegg and others) a crude deepfake audio of Donald Trump declaring nuclear war on Russia. It had taken programmers a great deal of work, and several hours of Trump audio, to produce a 30-second clip.
It wasn’t great, but it was enough to make a Portuguese friend burst into tears, believing it to be the start of nuclear Armageddon.
In 2018 we produced this work to highlight how deepfakes would change the democratic discourse. Six years on, we are seeing that warning come true, and the all-too-familiar picture of platforms and governments scrambling to respond to a threat we have seen coming for several years.
What is the true threat of deepfakes?
The initial fear was that one video published just before a poll could sway a tight election. This may have been the case in Slovakia's elections last year, where Russia deployed fake audio of liberal leader Michal Šimečka 'admitting' to rigging the election just before polling day.
However, so far we’ve seen more use of manipulated and AI-produced video by candidates than by foreign malign actors. In Argentina both camps used deepfake video. In the recent Indonesian election former dictator Suharto appeared in a video, despite having been dead for 15 years. These are just the tip of the iceberg.
The fundamental aim of disinformation is not necessarily to make people believe the lies; it is to make people unsure of what to believe. When this happens, in a world where anyone with an opinion can make it go viral, people will start to believe whatever fits their biases. People who were hurt by Covid lockdowns, for example, may wish to believe it was all a big conspiracy perpetrated by opaque groups.
Disinformation also works best when it contains a grain of truth, but is stripped of all context. It doesn’t need to be believable for people to share it.
What can be done?
Platforms will argue that it is difficult for them to know where to draw the line. It does not take AI to deceive people: often disinformation is just recycled video made to look like it fits a certain situation, or video crudely edited to show one side of a story. Are platforms responsible for policing this? Where do we draw the line between free speech and disinformation?
So instead, platforms are turning to technology itself (something we suggested back in 2018, incidentally). AI produces the videos and audio, and AI can also detect them, allowing platforms to label or remove manipulated content.
The recent Taylor Swift scandal has focused the minds of tech companies on showing they take the threat seriously. At the Munich Security Conference most major tech companies signed a new pact to implement 'reasonable precautions' against AI-enabled disinformation. X/Twitter is notably absent.
We can only hope that this effort to self-regulate leads to clear and concrete action from all signatories. However, some cynics have seen the pact as a way to stave off the threat of further regulation.
Just one look at OpenAI’s new text-to-video Sora app shows that this technology is moving far faster than governments and regulators can keep up with.
The answer therefore requires a number of levers to be pulled: action by platforms, efforts to watermark manipulated content, and government efforts to discourage the dissemination of harmful content (for example, UK Conservative Minister Tom Tugendhat has several times taken to Twitter to discourage the sharing of fake videos of his Labour opponents).
But above all it requires responsible politicians to stop deploying the playbook developed by autocrats. The best campaigns target emotional drivers - that won't change - but parties and candidates can ask themselves whether their campaign tactics are tearing up the playing field itself.
Deepfakes are here to stay, so society must adapt and education must change. People need to be imbued with a healthy scepticism of what they see, to take a second to ask if it’s real before sharing it, and to understand that sharing even blatantly faked video can harm our free societies.
You might not believe what you are seeing, but - as my Portuguese friend showed - someone will.
Written by: James Holtum