Defending against Deepfakes

In my last post, I talked about how AI is introducing new threats to online businesses. I ended with a reference to the T-800 in Terminator 2, emphasising the need to verify everything. (Spoiler: the T-800 cleverly uses wordplay to detect the hostile T-1000 over a phone call, even though it perfectly mimics John Connor’s foster mum’s voice.)

This got me thinking about two things, which I'll cover in this post:

  • Verifying Identity in the Age of Deepfakes
  • Applying Threat Modelling to Deepfakes

Verifying Identity in the Age of Deepfakes

Using a verbal passphrase to authenticate a person takes us back, quite literally, to biblical times. Judges 12 tells of a battle between the Gileadites and the Ephraimites. Each group pronounced the word shibboleth (flood) differently, and it went down like this:

The Gileadites captured the fords of the Jordan River opposite Ephraim. Whenever an Ephraimite fugitive said, “Let me cross over,” the men of Gilead asked him, “Are you an Ephraimite?” If he said, “No,” then they said to him, “Say ‘Shibboleth!’” If he said, “Shibboleth” (and could not pronounce the word correctly), they grabbed him and executed him right there at the fords of the Jordan. On that day forty-two thousand Ephraimites fell dead. - Judges 12:6

Pretty terrible outcome for getting your passphrase wrong.

So, let's look at three scenarios where deepfakes could cause issues and whether a passphrase or something else could help us deal with an imposter. We'll focus on situations where Multi-Factor Authentication isn't available—or where additional verification is still needed.

1) Person-to-Organisation - If you telephone your bank, they may ask you to confirm your address, your current balance and the value of the last transaction you made. This works well for the bank: they hold your data and can use it to verify that it's you. But what about the other way around?

In the Organisation-to-Person case, a legitimate organisation will likely know one or more facts about you, but you have no equivalent set of facts to prove that they are who they say they are. My advice for this scenario:

  • Don't give them any information. Instead, flip the script and ask them to confirm something about you, e.g. your last transaction or how many accounts you hold with them. If they're legitimate, they should be able to see this data. If they get defensive, beware.
  • If they can't do the above, ask for their name and branch and look up the phone number directly online. Don't trust any phone number they give you, as it may have been set up to lend their scam another layer of authenticity. And definitely validate any search results or websites that come back when you look them up - it's easy enough to fake a website as well!

2) Person-to-Person (Person is Known To You) - Using a passphrase with your friends or family is relatively easy. It doesn't need to be a formal passphrase; it could be a fact you both know, a funny memory, or what gift you got your niece or nephew for Christmas. The key thing is that if you're talking to a person over audio, or audio plus video, and something feels off - for example, pressure or urgency to send money - ask a question that only the real person would know. And don't automatically trust a call just because it comes from a familiar number: phone numbers can be spoofed, and as little as five seconds of audio is enough to clone a voice.

3) Person-to-Person (Person is Unknown To You) - This is where things get a bit more complicated. Within your company, or across the companies/suppliers/partners you work with, you won't necessarily know any facts about a person who urgently wants you to pay their invoice or send them documents. They may sound exactly like someone you know, but how can you be sure? I have some suggestions here, and if you have other ideas please let me know:

  • Move from Phone to Video - If you and the person are in the same company and speaking by phone, ask them to move the call to video via your organisation's MS Teams app (or equivalent). This raises the complexity for an attacker, who would need access to a corporate laptop and would have to fake video too.
  • Verify them via another route - If the person insists that they can't move to video, try sending them something via another comms channel that only they would have access to: for example, a random phrase to their work email address, or a question about their organisational hierarchy. I accept that this mechanism is tied to a phone which the voice cloner may have stolen too, but it does raise additional obstacles, as the attacker would have to bypass the security controls on the work phone (e.g. facial login, access to email) AND clone a voice well enough to trick you. My advice still stands though - if it feels off, or if you feel pressured, err on the side of caution and insist on a video call or face-to-face meeting when the person is able to do so. (A rough sketch of the out-of-band challenge idea follows this list.)
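
To make the out-of-band challenge concrete, here's a minimal Python sketch. Everything in it is illustrative: send_work_email is a hypothetical placeholder (it just prints; in practice you'd route it through your mail system), and the phrase format is arbitrary. The point is simply that only someone with access to the real mailbox can read the phrase back to you.

```python
import secrets

def send_work_email(address: str, body: str) -> None:
    # Hypothetical placeholder: a real version would go via your mail
    # system (SMTP, Microsoft Graph, etc.). Printing keeps it runnable.
    print(f"[email to {address}] {body}")

def out_of_band_challenge(work_email: str) -> str:
    """Send a one-time phrase to a channel only the real person controls."""
    # e.g. "3f2a-91cc-0b7d" - unguessable, but easy to read aloud
    phrase = "-".join(secrets.token_hex(2) for _ in range(3))
    send_work_email(work_email, f"Verification phrase: {phrase}")
    return phrase

expected = out_of_band_challenge("colleague@example.com")
# Ask the caller to read the phrase back; a voice cloner without
# access to the mailbox can't answer.
```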

IDEA - A useful extension for our authenticator apps would be the ability to add individuals, not just organisations. That way you'd have a ready-made, rotating passphrase to use as another layer of defence between people in your organisation. Are you listening, Microsoft & Google? :-)
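
For what it's worth, the plumbing for this already exists: it's just TOTP, the same algorithm behind today's authenticator codes. Here's a minimal sketch of RFC 6238 in Python using only the standard library (the secret shown is an illustrative placeholder). Two people who had exchanged a secret in person could read the current code to each other on a call as a rotating passphrase.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code (RFC 6238)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both parties run this with the same pre-shared secret and compare codes.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints e.g. "492039"
```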

Person-to-Person Authentication - could this be in our future?

I've come across other guidance about using multiple channels and splitting sensitive information across them, but honestly, should you? Is it ever truly life-and-death that the thing be done right then and there? I'd recommend simply holding off until you have an alternative means of reaching the requester.

Another option is a 'two-person' rule in the chain of command to sign off on any monetary transfers. I like the approach: it gives an adversary double the chances to slip up.
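
As a sketch of what that rule might look like in a payments workflow - an illustrative data model, not any particular product's API:

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    amount: float
    payee: str
    requested_by: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The requester never counts towards their own approvals.
        if approver == self.requested_by:
            raise ValueError("requester cannot approve their own transfer")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        # Two-person rule: at least two distinct, independent approvers.
        return len(self.approvals) >= 2

req = PaymentRequest(amount=25_000.0, payee="Acme Supplies", requested_by="alice")
req.approve("bob")
print(req.can_execute())  # False - a deepfaked 'alice' still needs a second person
req.approve("carol")
print(req.can_execute())  # True
```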

Applying Threat Modelling to Deepfakes

I'll not go too deep into threat modelling here; there's great material on it out there. In short, threat modelling is all about thinking ahead to spot and fix potential security problems before they happen. It's a way to anticipate how someone could attack a system, such as a hacker breaking into an app or stealing data, and then take steps to prevent it.

We need to include deepfakes in our threat models, rather than focusing only on the technology flows. Review anywhere that audio, video or images are used to communicate within your business, and start from the assumption that any phone or video call could be fake.
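
One lightweight way to start is to capture those communication flows as data and flag the exposed ones mechanically. A toy sketch - the flows and fields here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class CommsFlow:
    name: str
    channel: str              # "phone", "video", "email", ...
    can_move_money: bool      # could this flow trigger a payment?
    out_of_band_check: bool   # is there an independent verification step?

flows = [
    CommsFlow("Supplier invoice approval", "phone", True, False),
    CommsFlow("Exec urgent transfer request", "video", True, False),
    CommsFlow("HR onboarding documents", "email", False, True),
]

# Flag any flow where a cloned voice or face could move money
# without a second, independent verification step.
for f in flows:
    if f.channel in {"phone", "video"} and f.can_move_money and not f.out_of_band_check:
        print(f"REVIEW: '{f.name}' is exposed to deepfake impersonation")
```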

We’re well-trained to mistrust text messages and increasingly cautious with emails—but phone calls and videos that look and sound like our boss present a new challenge for us all.

Deepfake threat modelling - identify any business flows that could be weak

Additionally, I would strongly recommend reading the OWASP Guide for Preparing and Responding to Deepfake Events. Great content, especially on which areas to home in on when reviewing for deepfake impersonation attacks.

Closing Thoughts

One year ago today, on 21st February 2024, the EU AI Office was established. The EU AI Act itself came into force on 1st August 2024, and its provisions are being phased in gradually, with some applying from 2nd February 2025. The Act will give us some protection and visibility against deepfakes:

The EU AI Act addresses deepfakes primarily through transparency requirements rather than an outright ban. Here are the key points:

  1. Transparency Requirements: Creators and disseminators of deepfakes must clearly disclose that the content is artificially generated or manipulated. They must also provide information about the techniques used.
  2. Definition of Deepfakes: The Act defines deepfakes as AI-generated or manipulated content that resembles real persons, objects, places, entities, or events and appears authentic or truthful to a person.
  3. Scope: The regulation covers not just individuals but also entities like businesses and governments, acknowledging the potential for reputational or financial harm.
  4. Purpose: The aim is to empower consumers with knowledge about the content they encounter, reducing susceptibility to manipulation.

These regulations provide organisations and individuals with more tools to combat deepfake fraud. But ultimately, vigilance remains our best defence.





Monikaben Lala

Chief Marketing Officer | Product MVP Expert | Cyber Security Enthusiast | @ GITEX DUBAI in October

2 weeks

Davey, thanks for sharing!

John Sotiropoulos

Safeguarding national-scale projects and AI - Kainos | OWASP | USAIC | Best-selling Adversarial AI book author

2 weeks

Great piece there Davey McGlade loved seeing the European AI perspective and use of Threat Modelling. We at OWASP have also released a Guide to prepare and respond to DeepFake events. Would be great to have you contribute your feedback and insights to the next iteration. https://genaisecurityproject.com/resource/guide-for-preparing-and-responding-to-deepfake-events/. cc Rachel James Bryan Nakayama

Will Hamill

Head of Engineering for Digital Services at Kainos

2 weeks

I like how you describe Org-to-Person, something that's more about user education than building features into systems. I do the same thing now, as I suspect a company similar in name to 'Phones 4 Me' sold my details to unscrupulous parties some years back and every so often I get people calling from orgs claiming to be O2 offering me contract renewal deals. When I challenge them to tell me how much I currently pay per month, if they're really from O2, then they most often hang up on me!

Brad Mallard

CTO at Version 1, Leading AI Transformation and Innovation at scale | Top Voice 2024 (AI and Thought Leadership) | Forbes Tech Council

3 weeks

Super important subject and love the idea proposed Davey - thank you for the write up here - great stuff
