Deepfakes & AI: 10 Key Takeaways from Yesterday's Webinar
Yesterday, we hosted a webinar with Simon Paterson on Deepfakes & AI in Crisis Management, and it was a wake-up call. Simon is the US Head of Counter Disinformation at Edelman, leading efforts to combat digital misinformation and AI-driven deception. With extensive experience in crisis communications and digital strategy, he has been at the forefront of tackling deepfakes and disinformation campaigns that threaten trust in media, businesses, and institutions.
The world of crisis management is already a high-stakes battlefield, but now there’s a new weapon in play: deepfakes. AI-powered deception is rewriting the rules of trust, and businesses and leaders who aren’t prepared are stepping into quicksand. If you missed it, here are the 10 biggest takeaways you need to know.
1. Deepfakes Are More Than Just a Tech Gimmick
A decade ago, AI-generated fake videos seemed like science fiction. Now, they’re a real and growing threat. Simon made it clear: deepfakes are an existential threat to trust.
2. Businesses Are Woefully Unprepared
Despite the rapid rise of AI-generated misinformation, most organizations have no strategy in place to detect or respond to deepfakes. That’s a disaster waiting to happen.
3. Financial and Reputational Damage Is Real
A single deepfake scandal can tank a company’s reputation and stock value. Studies show that misinformation-related crises lead to a 2.7 standard deviation drop in reputation scores and a 0.2 standard deviation decline in stock valuation.
4. Trust in Media and Institutions Is Collapsing
We’re drowning in information, but trust is plummeting, as the Edelman Trust Barometer 2025 makes clear.
5. AI Is Lowering the Bar for Bad Actors
It used to take serious skill to create a convincing fake. Now, AI tools make it easy for almost anyone to produce hyper-realistic fake content designed to deceive.
6. We’re Bad at Spotting Deepfakes
Despite all the warnings, studies show that people struggle to recognize deepfakes: only 30% of respondents correctly identified manipulated media, even when primed to expect it. Worse still, 36% of older adults had never heard of deepfakes at all.
7. AI Can Also Help Fight Deepfakes
The same technology that enables deepfakes can also detect them: AI-assisted tools can help flag suspect images and video frames for human review (a rough sketch of such a check appears below).
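To make that concrete, here is a minimal sketch of what an AI-assisted screening step might look like, assuming a Python environment with the Hugging Face transformers library. The model id and the 0.8 confidence threshold are placeholder assumptions for illustration, not tools Simon named in the webinar.

```python
# Minimal sketch: scoring a suspect image or video frame with an
# off-the-shelf image classifier before escalating to human review.
from transformers import pipeline

# Placeholder model id -- swap in whichever vetted detector your team actually uses.
detector = pipeline("image-classification", model="your-org/deepfake-detector")

def score_frame(image_path: str, threshold: float = 0.8) -> dict:
    """Return the classifier's label/confidence pairs plus a simple flag."""
    results = detector(image_path)  # list of {"label": ..., "score": ...}
    flagged = any(
        r["label"].lower() in {"fake", "synthetic", "deepfake"} and r["score"] >= threshold
        for r in results
    )
    return {"image": image_path, "scores": results, "flagged": flagged}

if __name__ == "__main__":
    print(score_frame("suspect_frame.jpg"))
```

The point is not the specific model but the workflow: automated scoring narrows down the pile of suspect content, and humans make the final call.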
8. Misinformation Spreads in Three Stages
Simon broke down the three distinct stages that fake content moves through as it spreads.
9. Companies Need a Deepfake Response Plan
If a deepfake scandal hits your brand, reacting quickly is key, and that kind of speed only comes from having a response plan in place before the crisis starts.
10. The Future of Trust Is at Stake
Deepfakes aren’t just a business risk—they’re a societal risk. The more misinformation spreads, the harder it becomes to trust anything we see or hear. Organizations must act now to protect themselves and the truth.
The webinar was a clear reminder: we’re at an inflection point. Deepfakes are here, and they’re only getting more sophisticated. The question is—are we ready?
This was such an informative webinar, Kosta. Key takeaway from Simon Paterson MBE for those in crisis communications: "Speed is of the essence in your response." It reminded me of the Kevin Ashton quote: "The math of time is simple: you have less than you think and need more than you know."