How Will We Ever Be Safe From AI Deepfakes?

As we watch AI-enabled deepfake tools get better and better, it is easy to get caught up in the mania that AI-enabled deepfakes will lead to a whole new level of criminal activity. And for sure, AI-enabled deepfakes will let criminals run more scams, and more realistic ones. But we do not yet know what the overall impact will be. Will AI-enabled deepfakes end up causing a flood of new crime or only a modest increase? Anyone giving you an answer today is just guessing.

I have been doing cybersecurity for over 35 years. I have been through many cycles of mania where the latest and greatest technology was supposedly going to be the tipping point for immeasurable scams. Does anyone remember the metaverse? Our virtual selves were all going to get scammed in the virtual world. Or how cryptocurrencies were going to take out traditional finance and allow costless transactions? Or how blockchain was the answer to everyone’s computer security problems? Or how multifactor authentication (MFA) was going to stop 99% of cybercrime?

Any time a new, significant technology is introduced, a bevy of proponents and opponents argue that this one technology will be the tipping point event that either puts everyone out of a job or lets cybercriminals run even more rampant.

I like to remind people of two things. First, the world was worried about computers putting everyone out of a job back in the 1950s. The 1957 film Desk Set (https://en.wikipedia.org/wiki/Desk_Set), starring Katharine Hepburn and Spencer Tracy, was essentially about how a room-sized predecessor of Google was going to put all researchers out of a job. Nearly 70 years later, that is not the case.

Second, I am not sure if AI-enabled deepfakes are going to be the tipping point for criminals, but things are already really, really bad without AI-enabled deepfakes pervasively roaming the world. Half of all Internet traffic is malicious. About a third of all Internet traffic is created by malicious bots. Seventy to ninety percent of all successful data compromises are due to emails created without any AI help, often full of typos and mistakes. Good luck trying to buy or sell something on a common Internet marketplace without being scammed. I mean, it is pretty bad already.

I do think that AI will absolutely change the world. It already is. AI will change nearly every job in the world, either directly or indirectly. AI will cause entire categories of jobs to disappear. But AI will also create entire new classes of jobs we did not have before and improve most of the others. How do I know? Well, that is what has happened during every previous technological advancement. Calculators did not mean we no longer needed mathematicians. Google did not get rid of researchers; researchers just use Google now. The best movies and stories will never come from an AI.

AI will absolutely change the world, and AI-enabled deepfakes will make cybercriminals more successful. But how much more? We do not know. I am in the camp that says it will make things worse, but perhaps only a few percent worse than they are already today. Why? Well, it is not as if we will not respond to AI-enabled deepfakes. They are not going to come along and find us simply doing nothing. Like every other past threat, we will respond sufficiently to this one (or close to it).

Defenses

What are those defenses? What will they include?

Education

Education will be a big component of it. Yes, most people know about AI-enabled deepfake technologies. They also know they are not being used a lot by criminals yet. If you get successfully compromised by an attacker this year, odds are it will not involve an AI-enabled deepfake. This will change over time. Within a short period of time, we will absolutely see more scams conducted using AI-enabled deepfake technologies. And when that happens, people will see the news stories and be formally educated about them. People will be educated about how to recognize them and how to appropriately mitigate and report them. Just like any other threat.

Even if the scam is really, really, really good…it is still a scam. The scammer still has to ask for money, login credentials, or something of value. If your boss sends you an email or a Zoom video call asking you to transfer millions of dollars to some vendor or bank you have never heard of before, you are likely to feel unsettled about it, whether or not a good, accurate deepfake is involved.

Teach your users to be skeptical about any message…no matter how it arrives (email, SMS, phone call, in person, etc.)…if it contains these two attributes:

  • It is unexpected (i.e., you were not expecting it)
  • It is asking you to do something you have never done before (at least for that requestor)

So, if your boss or team is calling on Zoom asking you to transfer lots of money outside the previously approved company policies, check that request out using alternate, more trusted, legitimate methods before acting on it.

This advice about building a culture of healthy default skepticism is harder to follow than it sounds. The scammer’s messaging will almost always create an incredible sense of urgency. They want to get the potential victim moving without thinking. So, educate your users about AI-enabled deepfake attacks. Tell them they cannot immediately trust “great-looking” emails, calls, or videos that seem out of the ordinary. Anything digital can be faked. We just need to start acting more like that is the case. And it already is.

Educate users about AI-enabled deepfakes, how to recognize them, and how to report them appropriately.

DMARC

Something very traditional can help us.

Domain-based Message Authentication, Reporting and Conformance (DMARC), Sender Policy Framework (SPF), and DomainKeys Identified Mail (DKIM) are related global anti-phishing standards that let email receivers verify whether an email claiming to come from a particular domain really was sent from that domain. In short, they help prevent email domain spoofing.

Email senders can use DMARC, SPF, and DKIM to protect their email domains from spoofing by spammers and phishers. Email receivers can use DMARC, SPF, and DKIM to verify that received emails were truly sent from the domains claimed. Every organization sending or receiving email should enable DMARC, SPF, and DKIM.
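
To make that concrete, here is roughly what the three records look like as DNS TXT entries, using example.com and placeholder values (the DKIM public key is truncated, and your selector name and policy choices will differ):

  example.com.                        TXT  "v=spf1 include:_spf.example.com -all"
  selector1._domainkey.example.com.   TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AB"
  _dmarc.example.com.                 TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"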

If you want more details on DMARC, SPF, and DKIM, consider this one-hour webinar where I presented on the subject: https://info.knowbe4.com/implementing-dmarc.

So, if a user receives an AI-enabled deepfake email claiming to be from a legitimate person or brand, the receiver (or really, their email client or server) can use DMARC, SPF, and DKIM to verify whether that email really came from the legitimate domain. If it did not, the email should be viewed with heightened skepticism unless later proven legitimate. The only problem with DMARC, SPF, and DKIM is that there are no widespread equivalents for phone calls, SMS, and other messaging channels (at least so far).
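
If you are curious what any given domain publishes, the lookup is easy to script. Here is a minimal sketch in Python, assuming the third-party dnspython package; it only fetches the _dmarc TXT record and is not a full DMARC evaluation:

  import dns.resolver  # third-party package: pip install dnspython

  def get_dmarc_policy(domain):
      """Return the domain's published DMARC record, or None if none exists."""
      try:
          answers = dns.resolver.resolve("_dmarc." + domain, "TXT")
      except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
          return None
      for rdata in answers:
          # TXT records arrive as tuples of byte strings; join and decode them
          record = b"".join(rdata.strings).decode("utf-8", "replace")
          if record.lower().startswith("v=dmarc1"):
              return record
      return None

  print(get_dmarc_policy("example.com"))  # example.com is just a placeholder

SPF lives at the domain apex and DKIM under a selector label, so similar lookups work for those records too.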

AI-Detecting Tools

The good actors invented AI. The good actors have been using AI longer than the bad actors. The good actors are investing many billions of dollars, above and beyond what the bad actors are, in AI.

Every single morning, KnowBe4 has a company meeting. For over a year now, that meeting has mostly focused on AI. All employees are being incentivized…quite convincingly…to learn and use AI. We have cash contests for who uses AI the most. Our developers are building a ton of new features, most involving AI in some way or another. If I had to summarize my company’s single biggest focus over the last year or two, it would be AI. We have been using AI in our products to help better protect our customers for over six years now.

And I think most cybersecurity companies would say the same thing (i.e., that they have long been using AI and are developing more sophisticated AI features). If they are not, they will not be in business for much longer.

I think that for the first time in my 35-year career, I see a technology…AI…that will benefit the defenders more than the attackers. The defenders have been using it longer and are dedicating more time and dollars to it. I know that AI will be used to better recognize and stop malicious attacks. AI will eventually be better than any human at recognizing the disparate patterns and commonalities of attacker activity.

For example, these days, the average malicious domain created and used by a malicious hacker lasts only hours…maybe a day at most. So, it is very difficult for a human-based defender to detect the malicious domain, report it, and block it in a timely manner. But an AI-driven system can recognize and block these sites in seconds, especially when that AI is being fed a bunch of phishing emails that are being sent to different people from the same place.
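
As a toy illustration of just one signal such a system might weigh, here is a short Python sketch that flags very recently registered domains, assuming the third-party python-whois package. Real AI-driven blockers combine many signals (mail volume, recipient spread, content similarity) at machine speed; this only shows the flavor of a single feature:

  from datetime import datetime

  import whois  # third-party package: pip install python-whois

  def is_newly_registered(domain, max_age_days=7):
      """Flag domains registered in the last few days, a common trait of throwaway phishing infrastructure."""
      record = whois.whois(domain)
      created = record.creation_date
      if isinstance(created, list):  # some registrars return several creation dates
          created = created[0]
      if created is None:
          return True  # a domain with no visible creation date is itself a red flag
      return (datetime.now() - created).days <= max_age_days

  print(is_newly_registered("example.com"))  # registered in 1995, so False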

A human may have a hard time recognizing AI-enabled deepfakes, but an AI can look at that same video or audio and see clear signs that it was created using AI. My point is that AI can recognize and mitigate AI-enabled deepfakes better than humans can. For the first time in my career, I see a technology that has a good chance of being used better by the defenders than by the attackers. Most of the time, the defenders are simply playing catch-up with whatever the attackers are newly doing.
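
As a heavily hedged sketch of what the plumbing might look like, here is how a pretrained “real versus synthetic” frame classifier could be invoked with PyTorch. The detector model itself is the hard part; it is assumed here, not shown, and the preprocessing values are illustrative:

  import torch
  from torchvision import transforms
  from PIL import Image

  def synthetic_score(model, frame):
      """Return the model's estimated probability that a video frame is AI-generated."""
      preprocess = transforms.Compose([
          transforms.Resize((224, 224)),  # input size expected by many common backbones
          transforms.ToTensor(),
      ])
      x = preprocess(frame).unsqueeze(0)  # add a batch dimension
      with torch.no_grad():  # inference only, no gradients needed
          logit = model(x)
      return torch.sigmoid(logit).item()  # assumes the model outputs one logit per frame

  # Hypothetical usage, given some already-trained detector:
  # score = synthetic_score(my_detector, Image.open("suspect_frame.png"))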

AI has the potential to detect new attacks faster and better than humans. And that means faster detection and better responses. We will see. What this comes down to is a battle between the good actors’ AI-enabled threat-hunting bots and the bad actors’ AI-enabled malicious bots, and the best algorithms will win.

In summary, I, like everyone else, am worried about AI-enabled deepfakes. But how much worse they will make cybercrime is not yet known. History will show who took better advantage of AI: the defenders or the attackers. My bet is on the defenders. For the first time in my career, I feel hope for the future.

Saif Zia

SEO | On-Page | Off-Page | Technical SEO | Keyword Researcher | Cybersecurity | Streaming | Research Analyst | Gamer | Esports Analyst | Team Lead SEO

5 months ago

Dear Roger, I'm writing to you today as a big admirer of your work in cybersecurity and tech. My team and I recently wrote a blog post on the growing use of deepfakes in the entertainment industry, particularly focusing on the associated privacy risks (https://www.screenbinge.com/resources/deepfake-in-entertainment/). With data breaches expected to become commonplace by 2024 and Trojan malware posing a significant threat, the potential for misuse of deepfakes is concerning. We've even seen examples targeting celebrities like Tom Cruise and politicians like UK Prime Minister Rishi Sunak. Given your expertise, I believe your insights on this topic would be invaluable to our readers. We understand you're busy, but if you have a few moments, I'd be honored to hear your thoughts on our blog post. Furthermore, I would be grateful if you'd consider including a link to our article as a complementary resource for your readers. It could provide them with valuable additional perspectives. Thank you for your time and consideration.

Noah Kjos

Co-Founder @ DeepTrust | Voice and video call security built for social engineering and deepfakes | Author of Noah's Ark newsletter

6 months ago

This was great, Roger Grimes! Loved the context and practical advice on this issue.

Brett Hill

"The Mindful Coach?" | Tech Entrepreneur & Mindful Leadership Pioneer | Founder, Mindful Coach Association | Creator, The Mindful Coach Method? | Former Microsoft Tech Evangelist | ICF Coach

6 months ago

I posted an article the other day on how mindfulness can help us in the new AI world. Ironically, it was written almost entirely by ChatGPT, which I state clearly in the opening. The article was much better than I anticipated, so I put it through a few "see if this was written by AI" checkers, and they said it was 100% human. (At least it didn't use the word "delve," which for some reason is a favorite of ChatGPT.) Deepfakes are a very different animal, to be sure. But I am not sold on the capacity of AI to detect AI. It won't be 100% reliable, so there's also the question of people calling deepfake on things that aren't, in order to further muddy the water. It seems to me there needs to be some kind of "source of origin" validation, a trusted-source capability built into the protocol, like with TLS/SSL, to validate the source of a digital product. A way to "verify author" easily that is built into the spec. That would go a long way toward proving whether something was authoritative or not.
