The Deep Fake-ization of Culture Has Gone Mainstream
Sameer Ahuja
Lead GameChanger, a DICK’S Sporting Goods company | Helping families elevate the next generation through sports
The impact of deep fakes is about to get REAL
Star Wars. Star Trek. Aliens. Blade Runner. Inception. Terminator. 2001: A Space Odyssey. Sci-fi fare has long entertained the masses.
But now? These fictional tales are coming to life before our very eyes.
To wit, Netflix’s hit series Black Mirror is today’s equivalent of The Twilight Zone. Each episode explores a new mysterious paradigm, forcing modern audiences to ponder technology and its implications for society. One 2017 episode predicted the invention of robot dogs, years before the NYPD redeployed its now-infamous “Digidog.”
Now that the show has returned in 2023 after a hiatus (its showrunner halted production during the pandemic because “real life is already dystopian enough”), the new season opens with “Joan Is Awful.” In it, the protagonist discovers that a streaming service has transformed her existence into a TV series starring Salma Hayek.
Pretty cool, right? Perhaps not.
Turns out, “Joan Is Awful” is not filmed the conventional way. Instead, the show is created using advanced AI and deep fake tech, allowing for highly accurate—and highly disturbing—fabrication in near real time. (Episodes are almost instantly shot/edited/distributed on Streamberry—a tongue-in-cheek Netflix doppelganger.)
Somehow, by accepting the kind of labyrinthine terms and conditions most of us gloss over, the real-life Joan signed away all her rights. Under the contract, she has permitted her every action to be recorded and written into the ongoing show.
As you can probably guess, things get ugly quick.
Without going too far into the plot, it turns out the real-life Salma Hayek also agreed to a unilateral contract that robs her of her rights in this arrangement. Both women face the same predicament: they granted this service—a stand-in for many familiar platforms—sweeping access to their personal information in the name of convenience. Considering our own willingness to accept terms and conditions for nearly anything tech-related, should any of this feel unfamiliar?
You already know that answer.
Whether you’re a famous actor or an average Joan, we should all care more about how our data is being surveilled and appropriated. Especially now that this information is being used to train and develop artificial intelligence.
AI that’s creating an increasingly ersatz consensus reality.
*****
But it’s just TV, right? That’s what you may be telling yourself.
Not for long. This Black Mirror episode offers yet another chilling reminder of the transformative ways AI promises to use our data to produce and reproduce content. For good or bad. And just like the characters in this episode, many of us are growing increasingly uneasy over how AI is generating content. And manipulating minds.
Here’s another example.
Did you know it’s now possible to “revive” artists from the grave? Recently, the Beatles made headlines by announcing a final record that uses AI to resurrect John Lennon’s voice. To be fair, that “voice” will be sourced from actual recordings of the late singer.
AI will aid in bringing clarity to old vocals, filling in cracks where the analog falters.
Not bad, eh, Beatles fans?
John Lennon is back! Even so, Paul McCartney sounds cautious as he warns of the “trickery” AI can bring to the table. At minimum, the ethics are questionable.
Country music icon Dolly Parton recently spoke out on this topic. In an article for Deadline, she admits, “I have to decide how much of that high-tech stuff I want to be involved [with] because I don’t want to leave my soul here on this earth.”
But other performers have been quicker to embrace these advances.
Following an aphasia diagnosis that forced him to retire, actor Bruce Willis reportedly sold his likeness to the AI company Deepcake. Even cooler for fans, most of the data used to train this digital double comes from Willis’ performances in Die Hard and The Fifth Element—crowd favorites. According to Willis, “It’s a great opportunity for me to go back in time.”
That’s saying a lot coming from the man who starred in many a space/time bender, such as 12 Monkeys and Looper. Still, if the tech is here, and performers want their piece of the pie, then why not? Clearly, certain benefits arise from using AI, like enabling actors to still “act” despite physical limitations.
The situation does, however, call into question the entire acting profession. Example: will human actors eventually lose roles to cost-effective “digital doubles”? What about athletes being replaced by “avletes,” as I wrote about previously?
But there is an even thornier problem afoot. Whether it’s art or real life, in the coming years we’re going to have an even harder time distinguishing between what’s real and what’s artificially produced. As a society, we already wrestle with fake content, particularly fake news. Now AI comes along, threatening to muddy the media waters even more. For one thing, the internet enables the proliferation of a seemingly infinite number of bogus news stories, making digging for the truth a very tall order indeed.
Here’s a geopolitical example. Already, deep fakes have appeared in the Russia-Ukraine war. One deep fake depicted Ukrainian President Volodymyr Zelenskyy surrendering to Russia. And last month, hackers deep faked Russian President Vladimir Putin ordering martial law.
Also, thanks to widespread knowledge of AI and deep fakes, forgery itself can serve as a plausible alibi. Meaning, if a video leaks online of a celebrity doing something unsavory, they can now claim it’s bogus or generated by AI. (Ultimately, this doesn’t help a public already skeptical of what it sees, hears, and reads on digital platforms!)
So where do we go from here?
There is no silver bullet. But I can think of two ideas right now to stop the bleeding. First, we must train ourselves and our kids to be experts at media literacy. Becoming aware of the many ways we can be misled is crucial.
Second, we can use tech advances for good. The U.S. military’s research agency, DARPA, has already poured millions into researching generative AI and how to detect its output. Using sophisticated detection algorithms, researchers can increasingly tell whether videos have been doctored or manipulated. One shrewd team at the University at Albany has shown how deep fakes can be exposed via details like the number of eye blinks per minute.
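For the technically curious, here is a rough sense of how that blink check works. The short Python sketch below is purely illustrative (it is not the Albany team's actual code). It assumes you have already extracted six eye-landmark points per video frame with a facial-landmark detector, and the thresholds are assumptions chosen for readability.

```python
# Illustrative blink-rate heuristic for spotting suspect footage.
# Assumes eye landmarks (6 points per frame) come from an external
# facial-landmark detector; the thresholds below are assumed, not canonical.
import numpy as np

EAR_BLINK_THRESHOLD = 0.21       # eye counts as "closed" below this ratio (assumed)
MIN_CLOSED_FRAMES = 2            # consecutive closed frames that count as one blink
SUSPICIOUS_BLINKS_PER_MIN = 5.0  # real people typically blink ~15-20 times a minute

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Ratio of eye height to width from 6 (x, y) landmarks; it drops when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(eye_frames: list[np.ndarray], fps: float) -> float:
    """Count sustained dips in the eye aspect ratio and normalize to a per-minute rate."""
    blinks, closed_run = 0, 0
    for eye in eye_frames:
        if eye_aspect_ratio(eye) < EAR_BLINK_THRESHOLD:
            closed_run += 1
        else:
            if closed_run >= MIN_CLOSED_FRAMES:
                blinks += 1
            closed_run = 0
    minutes = len(eye_frames) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(eye_frames: list[np.ndarray], fps: float) -> bool:
    """Flag footage whose subject blinks far less often than a real person would."""
    return blinks_per_minute(eye_frames, fps) < SUSPICIOUS_BLINKS_PER_MIN
```

In practice, a cue like this is only one clue among many: modern detectors combine dozens of such signals, and the generators keep learning to fake them.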
It's also helpful to consult the past to safeguard the future. Recall that as the internet became more complex, new disciplines like cybersecurity grew into entire industries. I predict the same will occur with AI. We may very well soon see careers in “content security” as well as “authenticity tech.”
In any case, we have a steep hill to climb. Just like on TV, deep fakes threaten to uproot consensus reality. And the threat is no longer limited to entertainment or the news. It’s everything. While we may not end up watching Salma Hayek play us on TV, the truth is these hyper-realistic fakes aren’t going anywhere. The tech is already too potent. The opportunists have too much to gain. But we can use the best innovations at our disposal to distinguish real from fake.
As it’s been said, necessity is the mother of invention, and today’s necessities are no small matters. They’re only ramping up. To put it another way: things are about to get REAL.
Thank you for reading. If you like what you just read, please subscribe for more content. Consume at Once is about how to simplify a complex world being disrupted by technology, everywhere from content, to attention spans, to markets and geopolitics. Any opinions or forecasts contained herein reflect the personal and subjective judgments and assumptions of the author only.