A New Age of Disinformation

As the 2024 election season showed, AI-generated disinformation is a complex and evolving threat. While AI-driven "deepfakes" did not drastically affect recent elections, the danger is real — even if most of us aren’t the direct targets.

Beyond the Obvious: A New Age of Disinformation

When we think of AI disinformation, we often imagine doctored videos of politicians or celebrities. Yet, the real danger lies in fabricated narratives that target specific, often less visible, audiences. According to Oren Etzioni, a renowned AI researcher and head of the nonprofit TrueMedia, “For everything that you actually hear about, there are a hundred that are not targeted at you. Maybe a thousand.” This silent spread of deepfakes and misinformation reveals just how pervasive and impactful AI disinformation has become.

This evolving landscape is about more than viral, high-profile fakes. It includes misinformation circulating in private social media channels and messaging groups, spaces where mainstream fact-checkers have limited reach. Content is crafted to shape perceptions, sway choices, and influence events without ever reaching the broader public eye.

TrueMedia's Mission: Detecting the Undetectable

TrueMedia is tackling this issue head-on, offering a web and API service for identifying fake media. Detection is an enormously complex task that can’t be fully automated, so TrueMedia combines automated analysis with human forensics. Its global network collects and analyzes submissions, building a “ground truth” dataset that steadily improves detection accuracy.
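
To make the workflow concrete, here is a minimal sketch of how a client might submit a suspect piece of media to a detection service of this kind. The endpoint, field names, and response fields below are hypothetical illustrations, not TrueMedia's documented API.

    import requests

    # Hypothetical endpoint and key, shown only to illustrate the workflow;
    # this is not TrueMedia's actual interface.
    API_URL = "https://api.example-detector.org/v1/analyze"
    API_KEY = "YOUR_API_KEY"

    def analyze_media(media_url: str) -> dict:
        """Submit a publicly reachable media URL and return the service's verdict."""
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"media_url": media_url},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    result = analyze_media("https://example.com/suspect_clip.mp4")
    # A service of this kind would typically return a confidence score and
    # per-detector signals; borderline cases go to human forensic review.
    print(result.get("verdict"), result.get("confidence"))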

Their goals can be summarized in three questions:

  1. How much fake media is out there? Estimating the volume of AI-generated misinformation is challenging, as there’s no simple way to catalog it all.
  2. How many people see it? Platforms track some numbers, but millions of views often go unnoticed on private channels.
  3. What is the impact? This might be the hardest to quantify. How many people make decisions based on what they see in manipulated media? How many votes are swayed?

A Growing Need for Measurement and Innovation

While watermarking and voluntary identification standards for generated media have been introduced, these tools fall short of addressing the whole problem. As Etzioni aptly puts it, “Don’t bring a watermark to a gunfight.” Standards only work when everyone cooperates, and bad actors rarely do.

In the coming years, efforts to measure and counteract AI disinformation will need to evolve. It’s a daunting task, but there’s hope that by better understanding the problem, we can protect ourselves and our democratic systems from the quiet influence of false narratives.

