Deepfakes Have Arrived
Deepfake technology was on full display during the recent elections in Pakistan, as jailed former Prime Minister Imran Khan was able to rally support for his party with the assistance of artificial intelligence. Through the use of AI-powered voice cloning, he was able to disseminate his message to millions from behind bars.
Last week, I noted in a piece titled “Elections 2024: Can Democracy Endure?” that many Pakistanis believed Khan had been targeted by the establishment, which fabricated corruption charges to put him behind bars. Meanwhile, former Prime Minister Nawaz Sharif, who had himself been convicted on corruption charges, returned from self-imposed exile to find courts overturning their prior rulings against him.
Many ordinary citizens believed democracy in Pakistan had been hijacked by the military and elites, who were weaponizing the powers of the government to attack a political opponent. The public began describing the process as a “selection” rather than an election. Khan’s party was expected to lose out to the military’s preferred party, the Pakistan Muslim League (Nawaz).
But then, according to the New York Times,
The New York Times went on to praise Khan’s use of AI to combat voter suppression but also quoted Toby Walsh, author of the forthcoming Faking It: Artificial Intelligence in a Human World, as noting that “at the same time, it’s undermining our belief in the things we hear and see.”
It’s also important to note that Pakistan’s literacy rate (the percentage of citizens over 19 years of age who are proficient in reading and writing) is around 58%. In other words, 42% of the population would be unable to read newspaper stories or pamphlets promoting one party or candidate over another. For comparison, the United States literacy rate is 79%. Audio and video content therefore has a disproportionately large impact in a country like Pakistan.
Deepfakes are not just turning up in Pakistan. They also surfaced this weekend in Indonesia, where one of the political parties generated a deepfake video of Indonesia’s second president, former General Suharto, to highlight the importance of the upcoming vote.
Let’s not forget that Indonesia is the world’s largest Muslim nation and has approximately 200 million registered voters. Reaching (and influencing) even a small percentage can change the results of an election. And, according to CNN, “the online world plays a huge role in Indonesian politics. In a country with one of the world’s highest Internet usage rates, almost all political parties and politicians maintain strong presences on social media to amass followers and clout.”
When the stakes are high, contenders for the top job will use every tool at their disposal, and artificial intelligence is proving to be a seductive option. The campaign team of Defense Minister Prabowo Subianto, a leading candidate in the upcoming election, recently admitted to using AI to “give their chief a cuddly animated makeover on TikTok to appeal to young voters…[because] Indonesians aged 40 and younger — who number around 114 million voters — make up a majority of votes.” The campaign also used AI-generated children in videos to avoid running afoul of legal restrictions that ban the use of children in political commercials. Meanwhile, the other campaigns are using AI-powered chatbots to engage voters.
And lest we think such AI-powered campaigning is constrained to faraway elections in emerging markets, it’s time to think again. In late January, a deepfake audio clip of President Joe Biden’s voice encouraged potential voters to stay home during the New Hampshire primary; the audio was distributed via traditional robocalls.
So What?
The increasing use of deepfake technologies presents huge challenges to individuals and society alike. Our human faculties are not well suited to questioning the authenticity of images and sounds that appear familiar. Deepfakes create a disorienting perception of reality that erodes our belief in any objective truth. The mental strain of dealing with such dynamics is enormous and may, as I suggested in my annual set of predictions, compound an already problematic mental health crisis as everyone starts viewing every piece of information with suspicion. Their impact on elections is likely to be visible even sooner.
A Wall Street Journal essay titled “The Deepfake Dangers Ahead” described what such suspicion might practically mean: “soldiers might not trust actual orders, and [the] public may think that genuine scandals and outrages aren’t real. A climate of pervasive suspicion will allow politicians and their supporters to dismiss anything negative that is reported about them as fake or exaggerated.” Hany Farid, a professor at the University of California, Berkeley, who studies digital propaganda and misinformation, has suggested that deepfakes have created a “liar’s dividend” that provides wrongdoers caught on video or audio with “plausible deniability.”
One of the most memorable movie scenes from my childhood was of a little girl ominously describing the arrival of a paranormal disturbance with a concise phrase that is as memorable today as it was when the movie was released in 1982: “They’re heeere!” The movie, Poltergeist, was based on a story by Steven Spielberg and was remade in 2015.
Well, it seems we can use the same phrase to describe the ominous arrival of deepfake technologies into our modern world. They’re heeere! And they’re unlikely to go anywhere soon. Time to get ready for a world where you question everything you hear and see, with major implications for individuals, politics, democracy and society at large.
VIKRAM MANSHARAMANI is an entrepreneur, consultant, scholar, neighbor, husband, father, volunteer, and professional generalist who thinks in multiple dimensions and looks beyond the short term. Self-taught to think around corners and connect original dots, he spends his time speaking with global leaders in business, government, academia, and journalism. LinkedIn has twice listed him as its #1 Top Voice in Money & Finance, and Worth profiled him as one of the 100 Most Powerful People in Global Finance. Vikram earned a PhD from MIT, has taught at Yale and Harvard, and is the author of two books, Think for Yourself: Restoring Common Sense in an Age of Experts and Artificial Intelligence and Boombustology: Spotting Financial Bubbles Before They Burst. Vikram lives in Lincoln, New Hampshire with his wife and two children, where they can usually be found hiking or skiing.