Combating Deepfake Fraud is a Growing Challenge for Organizations

During a virtual conference call with organizational leaders from across the globe, employees of the renowned engineering firm Arup were victimized by an elaborate scam. Unbeknownst to attendees, threat actors had infiltrated the meeting and impersonated numerous individuals on the call. The leaders were fake. Other team members, invented. The Chief Financial Officer (CFO) was present in name only, as his image and voice were AI generated. The CFO’s voice was a spot-on clone, dispelling any suspicion of a simple social engineering attempt. The fraud was so convincing that one Hong Kong-based finance employee wired $25 million, in a string of transactions, to accounts established by the threat actors (1).

This is but one of many anecdotes demonstrating the rise of an emerging cyber threat: the deepfake.

What is a Deepfake?

A portmanteau of deep learning and fake, the term deepfake refers to a type of synthetic media—image, video, or audio—designed to seem legitimate. Often, but not always, deepfakes are used to manipulate or convince the target(s) that something false is, in fact, true. These invented media can mimic existing artifacts or be entirely new, authentic-looking content. For instance, the Arup fraud involved altered video and audio of the CFO, making it appear he was saying things he never said. While deepfakes can be used for entertainment, educational, and creative purposes, they also pose significant risks for misinformation, fraud, and identity theft.

The above-mentioned deepfake is among the most complex examples, though there are more common forms people can expect to encounter regularly. One might receive a call from somebody who claims to be, and sounds remarkably like, a team leader. There might be a video call from someone claiming to be company security or IT, requesting personal information to initiate a fraudulent password change. Or one might be confronted with a deepfake video of the CEO giving an invented presentation, fake articles about organizational changes, or manipulated pictures. Each of these may be the culminating instance of, or the initial stage in, a more elaborate social engineering scheme.

Whatever the medium or reason, deepfakes are increasingly problematic for individuals and organizations.

The Challenges of Deepfakes

The rise of AI technology has been accompanied by broad application and rapid user adoption. For context, ChatGPT—an industry leader in generative AI technology—reached 100 million active users within two months of launch, outpacing TikTok (nine months) and Instagram (two and a half years) (2). Two years later, the hundreds of millions of individuals who engage with ChatGPT and similar software include malicious actors who use the technology to create and disseminate fraudulent content with relative ease and impressive realism.

Even those who do not engage with generative AI technologies are largely aware of its existence and potential negative impact. For instance, every election now comes with warnings of potential deepfake-perpetrated fraud (3), and alerts of industry-specific threats seem ubiquitous (4). Knowledge of dangerous deepfakes is often enough to erode trust in legitimate institutions and can confuse users into thinking everything is fake. This challenging environment and general skepticism are, in a sense, a tremendous opportunity for threat actors to exploit overwhelmed users.

One blindspot that can be exploited to manipulate users rides on the psychological phenomenon of confirmation bias: individuals tend to accept as true what they want to be true, or whatever appears to be true (5). Combined with the ease of impersonating authority figures in appearance, speech, or text, and the simplicity of inventing content, the average user is at a disadvantage. How are we supposed to know what is real and what is fake?

Confirmation bias is additionally problematic when combined with deepfakes because individuals are not good at distinguishing the real from the fraudulent (even though we like to think we are). According to research published by the Universiteit van Amsterdam, people express great confidence in their ability to identify deepfakes and avoid being convinced by them but, when confronted with them, cannot detect them with significant accuracy (6). It is essentially a coin flip. And whether we accurately detect a deepfake or not, additional research indicates our “attitudes and intentions” can be greatly influenced by content we know is fake (7, 8).

Deepfakes and the Cyber Landscape

Because users are often the proverbial ‘front line’ of defense for organizations against cyberattacks, it is incumbent on every individual to become more educated about these threats and understand how to respond properly in the face of potential deepfake content. The first and simplest step is for organizations to mandate security awareness training that exposes users to threats, offers defensive insights, and reinforces positive behavior. As the Arup example shows, enhanced vigilance and layered security solutions are required, in addition to ongoing training, to properly combat deepfake media. The error of one person should not result in a $25 million loss.

Another troubling deepfake trend is that some AI programs have manipulated stolen credentials to bypass established protections like biometric scans. In one such instance, an individual’s identification card was stolen and aspects of it altered, allowing threat actors to “use the falsified photo to bypass [the employee’s] institution’s biometric verification systems” (9). As AI technology continues its rapid increase in sophistication, the potential areas of fraud expand: retinal scans, facial recognition, voice confirmation.

With deepfakes accounting for more than 40% of all “fraud attempts across video biometrics”, it is imperative for private, public, and corporate institutions to coordinate in ways that limit the breadth and impact of deepfake threats (10).

One way this is already happening is through the inclusion of watermarks on AI-generated content. Google, whose AI software Gemini has been used for both legitimate and deepfake purposes, is among those leading this charge (11). This effort, which still requires some user education, would limit the reach and impact of fraudulent media.
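
To illustrate the general concept, the sketch below checks an image file for a provenance marker in its metadata. This is a minimal sketch under loose assumptions, not how production watermarking works: real schemes such as Google's SynthID embed the watermark directly in pixel data and require the vendor's own detector, and the metadata key used here is entirely hypothetical.

# Conceptual sketch only: look for a hypothetical provenance tag in image
# metadata. Robust watermarks (e.g., SynthID) live in pixel data and need
# the vendor's detector; this is a complement for illustration, not a check
# that any real AI tool actually writes.
from PIL import Image

HYPOTHETICAL_PROVENANCE_KEY = "ai_provenance"  # not a real standard field

def has_provenance_tag(path: str) -> bool:
    """Return True if the image carries the hypothetical provenance marker."""
    with Image.open(path) as img:
        # PNG text chunks and similar metadata are exposed via img.info
        return HYPOTHETICAL_PROVENANCE_KEY in img.info

if __name__ == "__main__":
    print(has_provenance_tag("suspect_image.png"))

Even a check this simple makes the user-education point: metadata can be stripped or forged, which is precisely why pixel-level watermarks and signed provenance standards are being pursued.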

Risk Management Through Layered Cybersecurity

While no cybersecurity solution is a panacea, adopting a layered approach can significantly improve an organization’s cyber hygiene. What layers are necessary? The first, which is worth mentioning again, is ongoing security awareness training for every person involved in an organization, including third-party vendors and contractors. To counter wire fraud of the kind seen in the Arup case, organizations should establish funds-transfer processes that require in-person or other independently verified confirmation before any money can move; a sketch of such a control appears below.
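
As a purely illustrative example, the following Python sketch models a dual-control release gate for wire transfers: large transfers are held until additional approvers confirm the request over an independent channel, such as a callback to a phone number already on file. All names, thresholds, and fields are assumptions made for illustration, not a reference implementation or a real API.

# Hypothetical dual-control gate for wire transfers. A request is held until
# enough approvers (other than the requester) confirm it out of band, e.g.,
# via a callback to a number on file. Names and thresholds are illustrative.
from dataclasses import dataclass, field

OUT_OF_BAND_THRESHOLD_USD = 10_000  # above this, callback verification required
REQUIRED_APPROVALS = 2

@dataclass
class WireRequest:
    amount_usd: float
    beneficiary: str
    requested_by: str
    approvals: set = field(default_factory=set)

def record_out_of_band_approval(req: WireRequest, approver: str) -> None:
    """Record an approval; call only after the approver has verified the
    request over an independent channel (phone callback, in person)."""
    if approver == req.requested_by:
        raise ValueError("requester cannot approve their own transfer")
    req.approvals.add(approver)

def may_release(req: WireRequest) -> bool:
    """Small transfers pass; large ones need multiple out-of-band approvals."""
    if req.amount_usd < OUT_OF_BAND_THRESHOLD_USD:
        return True
    return len(req.approvals) >= REQUIRED_APPROVALS

The design point is simple: no single convinced employee, however senior the apparent requester, can move funds alone, so a deepfake needs to fool several people across independent channels.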

Multi-factor authentication (MFA) is also necessary, as it protects against credential theft. Another layer is strict enforcement of a complex password policy with regular rotation—credential theft remains the single most common means of initial access in a data breach. Additionally, newer technologies like liveness detection software help organizations verify video, audio, or photographic evidence by “ensuring that users are dealing with genuine documents rather than digital imitations or photocopies” (12).
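
For readers curious how one common MFA factor works under the hood, here is a minimal, self-contained sketch of time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most authenticator apps. It is for illustration only; production systems should rely on a vetted identity provider rather than hand-rolled MFA code.

# Illustrative TOTP (RFC 6238) implementation for demonstration purposes.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at_time: float | None = None,
         digits: int = 6, step: int = 30) -> str:
    """Derive the one-time code for a 30-second window from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept codes from adjacent windows to tolerate small clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
        for i in range(-window, window + 1)
    )

Because the code changes every 30 seconds and derives from a secret the attacker never sees, a stolen password alone is not enough—which is exactly the property that blunts credential-theft-driven deepfake schemes.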

Beyond these standard cybersecurity measures, contending with deepfakes may require more innovative practices. For instance, exposing employees to simulations using deepfakes will not only raise awareness of the challenges but also provide real-world scenarios, making identification and proper response more likely. Recognizing the unusual speech patterns, unrealistic edits, and blurred boundaries that are hallmarks of deepfakes requires repeated exposure and practice.

All of these gaps and vulnerabilities can be uncovered and corrected through proactive cybersecurity assessments, including Incident Response Planning, Risk Quantification assessments, and regular Penetration Testing. Furthermore, organizations should work with experienced cybersecurity professionals to establish 24/7 Security Operations Center monitoring of all endpoints and email tenants to safeguard against fraud and unauthorized network access.

***

Unfortunately, deepfakes are here to stay. As the fraudulent media becomes increasingly sophisticated, it is crucial that individuals and organizations work proactively to safeguard operations and sensitive information. The Arup incident—while not a singular event—serves as a stark reminder of the potential losses and disruptions that can arise from such threats. Act now to protect your organization and people from the deceptive dangers of deepfakes.

Sources

  1. Magramo, Kathleen. “British Engineering Giant Arup Revealed as $25 Million Deepfake Scam Victim.” CNN Business, 17 May 2024, https://www.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk/index.html.
  2. Hu, Krystal. “ChatGPT Sets Record for Fastest-Growing User Base - Analyst Note.” Reuters, 2 Feb. 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/.
  3. CBS News. “FBI Warns of Deepfake Videos Ahead of Election Day.” https://www.cbsnews.com/video/fbi-warns-of-deepfake-videos-ahead-of-election-day/.
  4. FinCEN. “FinCEN Issues Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions.” 13 Nov. 2024, https://www.fincen.gov/news/news-releases/fincen-issues-alert-fraud-schemes-involving-deepfake-media-targeting-financial.
  5. Nickerson, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
  6. Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. iScience, 24(11), Article 103364. https://pure.uva.nl/ws/files/67787899/1_s2.0_S2589004221013353_main.pdf
  7. OSF. https://osf.io/preprints/psyarxiv/4ms5a.
  8. Hughes, Sean. “Deepfakes Can Be Used to Hack the Human Mind.” Psychology Today, https://www.psychologytoday.com/us/blog/spontaneous-thoughts/202110/deepfakes-can-be-used-hack-the-human-mind.
  9. Winder, Davey. “Now AI Can Bypass Biometric Banking Security, Experts Warn.” Forbes, https://www.forbes.com/sites/daveywinder/2024/12/04/ai-bypasses-biometric-security-in-1385-million-financial-fraud-risk/.
  10. Entrust Cybersecurity Institute. 2025 Identity Fraud Report. https://www.entrust.com/sites/default/files/documentation/reports/2025-identity-fraud-report.pdf
  11. Shah, Agam. “Google’s AI Watermarks Will Identify Deepfakes.” Dark Reading, 15 May 2024, https://www.darkreading.com/cloud-security/google-ai-watermarks-identify-deepfakes.
  12. Regula. Deepfake Trends 2024. https://static-content.regulaforensics.com/PDF-files/0831-Regula-Deepfake-Research-Report-Final-version.pdf


The information in this newsletter publication was compiled from sources believed to be reliable for informational purposes only. This is intended as a general description of certain types of managed security services, including incident response, continuous security monitoring, and advisory services available to qualified customers through SpearTip, LLC, as part of Zurich Resilience Solutions, which is part of the Commercial Insurance Business of Zurich Insurance Group. SpearTip, LLC does not guarantee any particular outcome. The opinions expressed herein are those of SpearTip, LLC as of the date of the release and are subject to change without notice. This document has been produced solely for informational purposes. No representation or warranty, express or implied, is made by Zurich Insurance Company Ltd or any of its affiliated companies (collectively, Zurich Insurance Group) as to their accuracy or completeness. This document is not intended to be legal, underwriting, financial, investment or any other type of professional advice. Zurich Insurance Group disclaims any and all liability whatsoever resulting from the use of or reliance upon this document. Nothing express or implied in this document is intended to create legal relations between the reader and any member of Zurich Insurance Group. Certain statements in this document are forward-looking statements, including, but not limited to, statements that are predictions of or indicate future events, trends, plans, developments or objectives. Undue reliance should not be placed on such statements because, by their nature, they are subject to known and unknown risks and uncertainties and can be affected by numerous unforeseeable factors. The subject matter of this document is also not tied to any specific service offering or an insurance product nor will it ensure coverage under any insurance policy. No member of Zurich Insurance Group accepts any liability for any loss arising from the use or distribution of this document. This document does not constitute an offer or an invitation for the sale or purchase of securities in any jurisdiction.

In the United States, Zurich Resilience Solutions managed security services are provided by SpearTip, LLC.

Copyright © 2025 SpearTip, LLC
