The Deepfake Regulatory Dilemma and the Power of Pre-bunking
Forrest Alonso Haydon
Chief Project Officer @ MJV | Building Custom AI Agents for Everyone
Why Are Open-Source Deepfakes Exceptionally Challenging?
The proliferation of open-source AI technologies has dramatically democratized the creation of deepfakes, making detection at scale, and ultimately regulation, practically impossible. The open-source AI world often parallels, and sometimes surpasses, closed-source players such as OpenAI and Google in model development.
At worst, open-source technology plays incredibly fast catch-up and can reverse engineer almost any AI technology produced by the big players. At best, it is the future of AI development.
Now, thanks to open-source AI, running a local text-to-video generator is practical for anyone who can install Microsoft Office. This makes deepfakes a problem that is extremely difficult to regulate.
The Challenging Nature of Deepfakes
The accessibility of open-source tools for generating deepfakes has outpaced regulatory and detection efforts, posing significant challenges to maintaining digital integrity.
Neither traditional nor newer detection methods have been scientifically shown to capture the nuanced complexity of these AI-generated forgeries, and they often fail to distinguish them from genuine content. In fact, there is growing evidence that the cottage industry of AI-detection models is riddled with scams and false claims.
This introduces all types of regulatory questions →
How can we shut down bad actors if we can't determine whether something is AI-generated?
How can we write policy around something that's inherently non-deterministic?
Policy is supposed to guide decisions toward rational outcomes. But AI is not rational, and even the best policy engineers can't force it to make a particular decision.
These questions are urgent, and given the complexity of the landscape, no legislator or policy professional on Earth is close to resolving them.
Global Incidences and the Potential for Misinformation
We’re only three months into 2024, and the impact of deepfakes has already been immense.
Deepfakes have made their mark globally, with incidents ranging from manipulated political speeches to falsified videos of public figures.
Deepfakes of political leaders like Barack Obama and Donald Trump have stirred confusion and controversy. We’ve already highlighted other examples in India and Pakistan, where deepfakes were used to influence recent elections.
These instances underscore the potential of deepfakes to deepen societal divisions, and challenge the foundation of trust in digital media.
The Promise of Pre-bunking Campaigns
Given the limitations of detection and regulation, “pre-bunking” campaigns emerge as a proactive solution. By educating the public about deepfakes before they encounter them, we can foster a more discerning and critical approach to digital content.
This strategy empowers individuals to question and verify the information they consume, reinforcing the fabric of digital literacy and resilience against misinformation. We are particularly encouraged by the recent Prebunking with Google campaign launched in the EU. Check out this video they did about the Ukraine War and pre-bunking misinformation about it.
In Conclusion
As we navigate the challenges of digital deception, the journey towards a solution lies in the power of informed awareness. Through pre-bunking campaigns and a collective commitment to critical digital consumption, we can confront the challenges posed by deepfakes, safeguarding our digital dialogue and the integrity of our shared realities.