Defending against Deepfakes
Davey McGlade
Global Head of Cyber Security @ Version 1 | Striving to help our customers' solutions be secure
In my last post, I talked about how AI is introducing new threats to online businesses. I ended with a reference to the T-800 in Terminator 2, emphasising the need to verify everything. (Spoiler: the T-800 cleverly uses wordplay to detect the hostile T-1000 over a phone call, even though it perfectly mimics John Connor’s foster mum’s voice.)
This got me thinking about two things, which I'll cover in this post: verifying identity in the age of deepfakes, and applying threat modelling to deepfakes.
Verifying Identity in the Age of Deepfakes
Using a verbal passphrase to authenticate a person takes us, quite literally, back to biblical times. Judges 12 tells of a battle between the Gileadites and the Ephraimites. The two groups pronounced the word shibboleth ('flood') differently, and it went down like this:
The Gileadites captured the fords of the Jordan River opposite Ephraim. Whenever an Ephraimite fugitive said, “Let me cross over,” the men of Gilead asked him, “Are you an Ephraimite?” If he said, “No,” then they said to him, “Say ‘Shibboleth!’” If he said, “Shibboleth” (and could not pronounce the word correctly), they grabbed him and executed him right there at the fords of the Jordan. On that day forty-two thousand Ephraimites fell dead. - Judges 12 v6
Pretty terrible outcome for getting your passphrase wrong.
So, let's look at three scenarios where deepfakes could cause issues and whether a passphrase or something else could help us deal with an imposter. We'll focus on situations where Multi-Factor Authentication isn't available—or where additional verification is still needed.
1) Person-to-Organisation - If you telephone your bank, they may ask you to confirm your address, your current balance, and the value of the last transaction you made. This works well for the bank: they hold your data and can use it to verify that it's you. But what about the other way around?
In the case of Organisation-to-Person, a legitimate organisation will likely know one or more facts about you, but you have no equivalent set of facts to prove they are who they say they are. The advice I can give for this scenario is simple: never verify yourself to an inbound caller. Hang up and call the organisation back on a number you've sourced independently, such as from its official website or a recent statement, and be suspicious of any caller who can't confirm details the real organisation would already hold about you.
2) Person-to-Person (Person is Known To You) - Using a passphrase with your friends or family is relatively easy. It doesn't necessarily need to be a passphrase: it could be a fact you both know, a funny memory, or the gift you got your niece or nephew for Christmas. The key thing is that if you're talking to a person over audio, or audio and video, and something feels off, such as pressure or urgency to send money, ask a question that only the real person would know. And don't automatically trust a call just because it comes from a familiar number: phone numbers can be spoofed, and as little as five seconds of audio is enough to clone a voice.
3) Person-to-Person (Person is Unknown To You) - This is where things get more complicated. Within your company, or across the companies, suppliers, and partners you work with, you won't necessarily know any facts about a person who urgently wants you to pay their invoice or send documents to them. They may sound exactly like someone you know, but how can you be sure? I have some suggestions below, and if you have other ideas please let me know:
IDEA - A useful extension to our authenticator apps would be the ability to add individuals, not just organisations. That way you'd have a ready-made, rotating passphrase providing another layer of defence between people in your organisation. Are you listening, Microsoft & Google? :-)
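To illustrate the idea, here's a minimal sketch of how such a person-to-person rotating passphrase could work, assuming both parties have pre-shared a secret (exactly as authenticator apps do today). It builds on standard TOTP (RFC 6238) but renders the code as speakable words rather than digits; the word list and 60-second window are illustrative choices, not an existing app feature.

```python
# Minimal sketch of a rotating person-to-person passphrase, assuming a
# pre-shared base32 secret. Word list and 60-second window are illustrative.
import base64, hashlib, hmac, struct, time

WORDS = ["amber", "birch", "cobalt", "dune", "ember",
         "fjord", "gale", "harbour", "iris", "juniper"]  # hypothetical word list

def rotating_passphrase(shared_secret_b32: str, period: int = 60, words: int = 3) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation, as in RFC 4226 (HOTP)
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # Map the numeric code to a short, speakable phrase
    phrase = []
    for _ in range(words):
        phrase.append(WORDS[code % len(WORDS)])
        code //= len(WORDS)
    return " ".join(phrase)

if __name__ == "__main__":
    secret = "JBSWY3DPEHPK3PXP"  # example base32 secret; generate your own
    print("Current passphrase:", rotating_passphrase(secret))
```

Both parties running the same logic with the same secret see the same phrase, so a caller can be challenged to read out the current passphrase before any sensitive request is actioned.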
I've also come across guidance about using multiple channels and splitting sensitive information across them, but honestly, should you? Is a request ever so life-and-death that it must be actioned right then and there? I'd recommend simply holding off and waiting until you can reach the requester through an alternative means of communication.
Another option is a 'two-person' rule in the chain of command, requiring two people to sign off on any monetary transfer. I like the approach: it gives the adversary twice as many chances to slip up.
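As a rough illustration, here's a minimal sketch of what enforcing such a rule might look like in a payments workflow. The class, field names, and amounts are hypothetical, and a real system would authenticate each approver out-of-band.

```python
# Hypothetical sketch of a 'two-person' sign-off rule for monetary transfers.
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    amount: float
    payee: str
    approvals: set = field(default_factory=set)

    def approve(self, approver_id: str) -> None:
        self.approvals.add(approver_id)

    def can_execute(self, required_approvers: int = 2) -> bool:
        # Two *distinct* people must sign off; a deepfaked caller who
        # convinces one person still cannot release the payment alone.
        return len(self.approvals) >= required_approvers

request = TransferRequest(amount=25_000.00, payee="ACME Supplies Ltd")
request.approve("alice@corp.example")
print(request.can_execute())   # False: only one approval so far
request.approve("bob@corp.example")
print(request.can_execute())   # True: two distinct approvers
```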
Applying Threat Modelling to Deepfakes
I'll not go too deep into threat modelling here; there's great material on it out there. In short, threat modelling is about thinking ahead to spot and fix potential security problems before they happen. It's a way to anticipate how someone could attack a system, like a hacker breaking into an app or stealing data, and then taking steps to prevent it.
We need to include deepfakes in our threat models rather than focusing only on the technology flows. Anywhere audio, video, or images are used to communicate within your business should be reviewed, and you should start from the assumption that any phone or video call could be fake.
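To make that concrete, here's a hypothetical sketch of one way to capture this in a threat model: enumerate human communication channels alongside the technology flows, and flag any whose identity check relies on audio, video, or images as in scope for deepfake threats. The channel names and fields are illustrative assumptions, not a prescribed methodology.

```python
# Illustrative sketch: extend the threat model beyond technology flows by
# listing communication channels and flagging those exposed to deepfakes.
channels = [
    {"name": "Finance approval call", "medium": "audio", "identity_check": "voice recognition"},
    {"name": "Exec video briefing",   "medium": "video", "identity_check": "face on screen"},
    {"name": "Payment portal",        "medium": "web",   "identity_check": "MFA"},
]

DEEPFAKE_MEDIA = {"audio", "video", "image"}

for channel in channels:
    at_risk = channel["medium"] in DEEPFAKE_MEDIA
    print(f"{channel['name']}: deepfake threat in scope = {at_risk}")
```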
We’re well-trained to mistrust text messages and increasingly cautious with emails—but phone calls and videos that look and sound like our boss present a new challenge for us all.
Additionally, I strongly recommend reading the OWASP Guide for Preparing and Responding to Deepfake Events (https://genaisecurityproject.com/resource/guide-for-preparing-and-responding-to-deepfake-events/). It's great content, especially on which areas to home in on when reviewing for deepfake impersonation attacks.
Closing Thoughts
One year ago today, on 21 February 2024, the EU AI Office was established. The EU AI Act came into force on 1 August 2024, and its provisions are being phased in gradually, with some obligations applying from 2 February 2025. The Act will give some protection and visibility against deepfakes:
The EU AI Act addresses deepfakes primarily through transparency requirements rather than an outright ban. The key points:
- Providers of AI systems that generate synthetic audio, image, video, or text must ensure outputs are marked, in a machine-readable format, as artificially generated or manipulated.
- Deployers of AI systems that create deepfakes must disclose that the content has been artificially generated or manipulated, with limited exemptions (for example, where use is authorised by law, or a lighter disclosure for artistic and satirical works).
These regulations provide organisations and individuals with more tools to combat deepfake fraud. But ultimately, vigilance remains our best defence.