The Appeal (and Danger) of Deepfakes

If you haven't had the opportunity, there's a fantastic example of a deepfake video in which actor Bill Hader tells a story about meeting Tom Cruise. At nearly 10 million views (at the time of writing), it's safe to say there's an undeniable appeal to these videos. The popularity of deepfakes has grown rapidly over the past couple of years, thanks in part to internet users absurdly superimposing Nicolas Cage's face onto those of actors and actresses in clips from other films.

Deepfakes are made possible by artificial intelligence: creators use Generative Adversarial Networks (GANs), which learn to create new data by training on existing datasets, and it's a case of the more, the merrier. For example, Nicolas Cage has appeared in more than 100 films, providing a wealth of training data and allowing him to infiltrate the uncanny valley in ways we never could have dreamed. But beyond conjuring convincing impersonations, the technology has legitimate uses, such as upscaling low-resolution films and video games and de-ageing actors (or, in some cases, even bringing them back from the dead, as with Peter Cushing's Grand Moff Tarkin in Rogue One: A Star Wars Story). So what other uses do deepfakes have?
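To make the adversarial idea concrete, here is a toy sketch of GAN training, not a real deepfake pipeline: the "dataset" is just numbers drawn from a normal distribution, the generator is an affine map of noise, and the discriminator is logistic regression. All numbers and model choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b; discriminator d(x) = sigmoid(w*x + c).
# The "real" data the generator must learn to mimic is N(4, 1).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    # Train discriminator: real samples -> label 1, fakes -> label 0.
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones(batch), np.zeros(batch)])
    p = sigmoid(w * x + c)
    err = p - y                   # cross-entropy gradient w.r.t. the logit
    w -= lr * np.mean(err * x)
    c -= lr * np.mean(err)

    # Train generator: adjust (a, b) so the discriminator scores fakes as real.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    p = sigmoid(w * fake + c)
    grad_fake = (p - 1.0) * w     # non-saturating generator loss
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

samples = a * rng.normal(0.0, 1.0, 10_000) + b
print(round(samples.mean(), 1))   # generated mean drifts toward the real mean of 4
```

The same push-and-pull, at vastly larger scale and on images or audio rather than scalars, is what lets a generator trained on 100 films of footage produce a convincing face or voice.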

Despite all the beneficial uses for the technology, deepfakes have unfortunately proven effective at taking business email compromise (BEC) scams to the next level. In 2019, the CEO of an unnamed U.K.-based energy company was deceived into believing he had been called by his boss, the chief executive of the organization's German parent company, and proceeded to follow his instructions and transfer €220,000 (approximately $243,000) to the bank account of a Hungarian supplier.

This is believed to be the first example of a deepfake-generated voice being used to perpetrate a scam. But it wouldn't be the last.

In another example, a deepfake-created voice was used to convince an employee to pay an overdue invoice. To further cement the legitimacy of the request, a follow-up email that appeared to be from the "executive" was sent to the employee containing the same financial information, corroborating the phone call and reinforcing the urgency for the employee to rush the payment.

Zohaib Ahmed, CEO of Resemble AI, astutely described this voice-cloning technology as "Photoshop for voice."

It is plausible that an unnatural-sounding voice could be put down to a bad phone signal, and if the call is made to a junior employee being hurried into an urgent payment by an irate CEO, they may not stop to consider the implications. As the technology improves, the size of the dataset required to create a deepfake voice is shrinking. The internet has plenty of examples showing how an individual can record themselves saying only a few key phrases and end up with a compelling clone of their voice. Combine this with traditional social engineering methods and cold calling, and it won't be long before anybody can convincingly pass for the CEO of a multinational corporation.

How can businesses guard against this updated form of attack? The U.S. Air Force provides a great example of an effective safeguard. Its "Two-Person Concept" is a tamper-control measure designed to prevent the accidental or malicious launch of nuclear weapons by a single individual: launch orders must be confirmed by two officers before such an offensive can proceed.

Admittedly, this is an extreme example, but having procedures in place for urgent scenarios, such as two-person authorisation or "safe words", adds a further layer of security and could prevent both personal embarrassment and financial loss.
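The two-person principle translates naturally into payment workflows. A minimal sketch, assuming a hypothetical in-house approval system (the class, names, and threshold below are all illustrative, not a real API):

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """A payment that no single caller, however convincing, can release alone."""
    beneficiary: str
    amount_eur: float
    approvals: set = field(default_factory=set)

    APPROVALS_REQUIRED = 2  # the "two-person" rule

    def approve(self, officer_id: str) -> None:
        # A set means the same officer cannot count twice.
        self.approvals.add(officer_id)

    def execute(self) -> str:
        if len(self.approvals) < self.APPROVALS_REQUIRED:
            raise PermissionError(
                f"{len(self.approvals)}/{self.APPROVALS_REQUIRED} approvals; payment blocked"
            )
        return f"Paid €{self.amount_eur:,.2f} to {self.beneficiary}"

req = PaymentRequest("example supplier", 220_000)
req.approve("cfo")
req.approve("cfo")  # a duplicate approval is ignored
try:
    req.execute()
except PermissionError as e:
    print("blocked:", e)

req.approve("treasury_officer")
print(req.execute())
```

However persuasive a cloned voice is on the phone, it cannot supply the second, independent approval, which is exactly the property that defeats this class of scam.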
