First Analysis quarterly insights: Cybersecurity

Detection solutions prevent the spread of harmful deepfakes

by Howard Smith and Liam Moran

Deepfake creation technology has evolved significantly from the rudimentary face swaps that first allowed everyday users to create low-quality deepfakes in the mid-2010s. Since then, deepfake creators, including bad actors, have developed a variety of creation methods, and the technology continues to evolve rapidly.

Governments, individuals and corporations are eager to find ways to stop malicious deepfakes, given their sometimes enormous monetary and societal costs. Deepfake detection companies address this need. They essentially reverse engineer the deepfake creation process to identify manipulated content.
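
To make the concept concrete, below is a minimal sketch of a frame-level video detector: sample frames, score each with a binary classifier trained to recognize manipulation artifacts, and aggregate the scores. The `score_frame` classifier, the sampling rate and the decision threshold are illustrative assumptions for this sketch, not a description of any vendor's product.

```python
# Minimal sketch of a frame-level deepfake detection pipeline.
# Assumptions (not from this report): OpenCV for frame extraction and a
# caller-supplied `score_frame` callable returning P(frame is synthetic).
import cv2

def detect_deepfake(video_path, score_frame, sample_every=30, threshold=0.5):
    """Flag a video as likely manipulated if the mean per-frame
    synthetic-content probability exceeds `threshold`."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of stream
            break
        if idx % sample_every == 0:     # roughly one frame per second at 30 fps
            scores.append(score_frame(frame))
        idx += 1
    cap.release()
    return bool(scores) and sum(scores) / len(scores) > threshold
```

Commercial detectors typically combine many such signals (visual artifacts, audio inconsistencies, metadata anomalies) rather than relying on a single per-frame score, but the sample-score-aggregate pattern above captures the basic shape of the approach.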

The criteria for choosing among deepfake detection solutions vary by use case. We discuss use cases in news media, law enforcement and other government functions, banking, and general commerce; each requires a different level and type of deepfake detection.

We highlight a sample of large technology companies that offer some deepfake detection capabilities, as well as several deepfake detection specialists, including three for which we provide detailed profiles.


Table of contents

Includes discussion of three private companies

  • Growing rapidly, harmful deepfakes exact high monetary and societal costs
  • Deepfake creation models continue to grow in complexity, creating more convincing fakes
  • Combatting malicious deepfakes with detection software
  • Use cases influence buying behavior
  • Some players in the deepfake detection market
  • The truth is out there
  • Cybersecurity index opens wide lead over Nasdaq
  • Cybersecurity M&A: Notable transactions include Talon Cyber Security and Tessian
  • Cybersecurity private placements: Notable transactions include SimSpace and Phosphorus


Growing rapidly, harmful deepfakes exact high monetary and societal costs

Deepfakes are synthetic media generated by artificial intelligence (AI), created either entirely anew or by modifying real content, to produce compelling imitations of reality. They can take the form of photos, videos, audio recordings and other media, making it difficult to distinguish fact from fiction. The incidence of deepfakes was 10 times greater in 2023 than in 2022, according to SumSub, an identity verification and fraud prevention company, clear evidence that deepfake creation technology is being used more than ever.

Although most sentiment around deepfakes is negative, the technology can be beneficial. One example is in marketing, where actors can license their likenesses so marketers can swiftly and cost-effectively generate advertisements with deepfake technology rather than requiring the actors to perform. Another is using deepfake creation technology to personalize ad content based on individual customers' preferences and demographics. Beyond marketing, deepfake technology is increasingly used in entertainment content such as television shows, movies and podcasts, where it can manipulate actors' appearances and facial expressions to fit production needs.

Of course, deepfake technology is also often used to cause harm, a vivid example being the unauthorized use of people's likenesses in pornography; indeed, the majority of current deepfake regulation in the United States deals with banning its use for nonconsensual pornography. For purposes of this report, however, the most relevant harmful uses of deepfake technology are influencing geopolitical events and public policy and perpetrating fraud. For example, hackers recently created and published a deepfake video of Ukrainian President Volodymyr Zelenskyy urging Ukrainians to lay down their arms in the conflict with Russia. (The deepfake was quickly identified and removed.) In early 2019, a deepfake video of Ali Bongo, president of Gabon, played a role in sparking an attempted military coup there. Many more examples are found and reported regularly.

In the context of fraud, the Federal Trade Commission reported that imposter scams resulted in $2.6 billion in losses in 2022, affecting over 36,000 victims. A well-known example is bad actors impersonating a grandchild to urgently ask a grandparent for money. In the corporate world, the CEO of a UK-based energy company received what he believed was a call from his parent company's CEO requesting that he wire money to a Hungarian supplier. Recognizing the voice, the CEO transferred the funds, not realizing the voice had been generated by AI; the money was lost.



