AI-Fueled Videos Drag Angel Reese Through a False Doping Scandal

In the world of women’s college basketball, Caitlin Clark and Angel Reese have captured the public’s imagination with their competitive fire. Their on-court rivalry transcends sports, turning them into cultural figures who exemplify ambition, excellence, and unapologetic authenticity.

When LSU beat Iowa in the 2023 NCAA championship, Reese’s playful taunt—a "you can't see me" gesture directed at Clark—sparked a media frenzy. Some critics praised it as gamesmanship; others decried it as unsportsmanlike behavior. The controversy even reached the White House, igniting debates about double standards, sportsmanship, and representation in women’s sports.


A couple of days after the game, Clark said, “I don’t think Angel should be criticized at all. I’m just one that competes, and she competed. I think everybody knew there was going to be a little trash talk in the entire tournament. It’s not just me and Angel.”

But what began as a spirited rivalry soon spiraled beyond reality, plunging into a web of AI-generated misinformation. As the rivalry gained traction during the WNBA season, AI content creators seized the opportunity to fabricate stories of conflict and scandal. When the season ended, they sustained the engagement by spinning a sensational doping scandal involving Reese. (Rumors also swirled that Caitlin Clark had left the WNBA to play in Europe.) In this fabricated reality, Reese’s basketball career unraveled into chaos—complete with steroid accusations, team bans, lawsuits, and a public apology.

The Fabricated Fall: When AI Creates Its Own Reality


AI-generated videos and articles began to flood social media, particularly on platforms like YouTube and TikTok. Content creators used AI language models to script the scandal, synthesized voiceovers to narrate it, and AI-powered video editing tools to produce hundreds of videos. Each video pushed the fictional narrative further, weaving tales of Reese’s alleged doping scandal with stunning precision—tricking casual viewers into believing the story was real.

The false narrative became so compelling that it mimicked the structure of breaking news cycles:

  • Stage 1: The discovery of Reese’s supposed steroid use.
  • Stage 2: Her removal from the team and subsequent lawsuit by LSU.
  • Stage 3: A fictional public apology, followed by emotional interviews.

As the story evolved, more AI-generated videos emerged, each contributing to the illusion of a continuous scandal. The lines between reality and fiction blurred, creating a parallel universe fueled by engagement metrics, not truth.

The AI-Powered Content Machine


Content creators leverage a streamlined suite of AI tools to produce these false stories at breakneck speed. The process has become disturbingly efficient, flooding the internet with misleading content (a sketch of such a pipeline follows the list below):

  • AI Language Models: Generate scripts from prompts, fabricating a narrative around Reese’s alleged scandal.
  • AI Voice Synthesis: Provides human-like voiceovers for the scripts, making the false content sound convincing.
  • AI Video Editing Tools: Seamlessly combine stock footage, recycled images, and synthesized voiceovers to create misleading videos.
  • Clickbait Thumbnails and Titles: Generated by AI to attract clicks, engagement, and viral traction.

This seamless integration of AI technologies creates a powerful feedback loop of misinformation, preying on public curiosity for scandal. As sensational content gains traction, the incentives of engagement-driven platforms like YouTube reward creators for quantity over quality—turning falsehoods into profitable content.
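
A toy model helps show how that feedback loop compounds. The simulation below is not how any real recommender works; it simply hands out impressions in proportion to past engagement, which is enough for the most sensational (highest click-rate) item to crowd out the rest.

```python
# Toy model of an engagement-ranked feed. Items with higher
# "sensationalism" get clicked more often; impressions go to
# items in proportion to past clicks, so early engagement
# compounds into dominance.

import random

random.seed(0)

items = {
    "measured game recap": 0.02,        # click probability per view
    "spicy rivalry take": 0.08,
    "fabricated doping scandal": 0.20,
}
clicks = {name: 0 for name in items}

for _ in range(50_000):
    names = list(items)
    # Impressions proportional to past engagement (plus-one smoothing).
    weights = [clicks[n] + 1 for n in names]
    shown = random.choices(names, weights=weights)[0]
    if random.random() < items[shown]:
        clicks[shown] += 1

total = sum(clicks.values())
for name, n in sorted(clicks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {n} clicks ({n / total:.0%})")
```

Run it and the fabricated scandal ends up with the overwhelming share of clicks, not because the platform "prefers" falsehood, but because engagement-weighted exposure rewards whatever gets clicked most.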

Shallow Fakes, Not Deepfakes

What makes this phenomenon even more troubling is that these videos don’t rely on sophisticated deepfake technology. Instead, they belong to the category of shallow fakes. Unlike deepfakes, which use complex AI to clone a specific person’s face or voice, shallow fakes repurpose existing video clips and images, recontextualizing them with false narratives.

The intent isn’t to fool viewers with fabricated footage. Instead, these videos layer AI-generated voiceovers and deceptive storytelling over genuine clips, presenting falsehoods as plausible truths. This approach exploits the viewer’s familiarity with real footage, turning recognition into a tool for manipulation.
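
This also explains why shallow fakes tend to slip past forensic tools built for deepfakes. In the hypothetical model below, the pixels are untouched archival footage; only the narration and framing are fabricated, so frame-level analysis has nothing to flag. The clip names and fields are illustrative, not real files.

```python
# Hypothetical model of a shallow fake: authentic footage,
# fabricated framing. A detector inspecting the video frames
# sees only genuine, unaltered material.

from dataclasses import dataclass

@dataclass
class ShallowFake:
    source_clips: list[str]       # real, unaltered footage
    synthetic_narration: str      # AI-written and AI-voiced
    title: str                    # clickbait framing

    def manipulated_pixels(self) -> bool:
        # The deception lives in the narration and title,
        # not in the image data itself.
        return False

video = ShallowFake(
    source_clips=[
        "2023_ncaa_final_highlights.mp4",      # illustrative name
        "post_game_press_conference.mp4",      # illustrative name
    ],
    synthetic_narration="Sources confirm a failed test...",
    title="BREAKING: star BANNED after doping scandal",
)
print("Frame-level forgery detectable:", video.manipulated_pixels())
```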

The Danger of Misinformation by Design

The rise of shallow fakes highlights a shift in misinformation tactics. They don't rely on technological sophistication but instead exploit speed, scale, and plausibility. These false narratives spread rapidly, overwhelming audiences and making it increasingly difficult to distinguish between real events and fabricated ones.

This trend also reveals the vulnerabilities inherent in today's engagement-driven platforms. As misinformation becomes easier to produce and harder to detect, the erosion of trust in media and public discourse accelerates. The implications extend beyond individual scandals—undermining trust in institutions, public figures, and even democratic processes.

As we navigate this new reality, it’s crucial to rethink how we approach information ecosystems. Without proactive strategies to counter misinformation, the spread of shallow fakes threatens to deepen societal divides and degrade our collective ability to discern truth from fiction.

The Incentive Structure That Propels Misinformation

The monetization systems of platforms like YouTube have made this kind of misinformation lucrative. AI-generated content thrives in an ecosystem driven by views, clicks, and subscriptions. The incentives are simple: the more sensational the content, the more engagement it generates, and the more money flows to creators. False scandals and fabricated controversies become profitable commodities in the digital economy.

With YouTube’s monetization tools—ad revenue, channel memberships, and Super Chats—content creators found financial motivation to continue expanding Reese’s fictional downfall. The scandal became a high-performing keyword, ensuring that AI-generated videos related to it would dominate recommendation algorithms and keep users hooked. Truth took a back seat to metrics.
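
A back-of-envelope calculation shows why volume pays. The RPM (ad revenue per thousand views) and view counts below are assumptions for illustration, not actual YouTube figures.

```python
# Illustrative economics of an AI-generated video channel.
# RPM and view counts are assumptions, not measured data.

videos_per_week = 50          # feasible when AI writes, voices, edits
avg_views_per_video = 20_000
rpm_usd = 3.00                # assumed ad revenue per 1,000 views

weekly_views = videos_per_week * avg_views_per_video
weekly_revenue = weekly_views / 1_000 * rpm_usd
print(f"{weekly_views:,} views/week -> ${weekly_revenue:,.0f}/week")
# 1,000,000 views/week -> $3,000/week
```

At these assumed rates, the marginal cost of each additional fabricated video is near zero, while each one adds views: the math itself pushes toward quantity.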

The Consequences of a "Dead Web"


The fake Angel Reese doping scandal exemplifies what could be termed the "Dead Web"—a landscape filled with artificial, misleading, and harmful content. In this ecosystem, misinformation spreads unchecked, echo chambers grow, and public trust erodes. The stakes are high:

  • Trust in Institutions: When even routine sports stories become corrupted by falsehoods, public trust in institutions—including media and sports organizations—crumbles.
  • Overwhelming Users: The sheer volume of AI-generated misinformation leaves users struggling to separate fact from fiction, contributing to a growing sense of confusion and fatigue.
  • Erosion of Human Connection: As false narratives proliferate, authentic interactions give way to suspicion, and the very concept of shared reality becomes tenuous.

Moving Forward: Information Reset and Recalibration


The rise of AI-generated content makes it clear that our current information ecology is unsustainable. If we want to avoid falling deeper into a web of misinformation and manipulation, we need a great information reset—a recalibration that restores balance to the digital landscape. Platforms must prioritize truth, context, and transparency over engagement metrics, and users must develop stronger digital literacy skills to navigate this increasingly complex environment.

The Metaweb, as described in the book The Metaweb: The Next Level of Internet, offers a glimpse of a solution: a new layer above the web that ensures authenticity, encourages collaboration, and fosters trust. Through tools like Bridges—which connect related content across silos—and decentralized authentication systems, we can create a healthier, more transparent information ecosystem.

In this vision, AI containment plays a crucial role, ensuring that bots and AI systems serve human interests without distorting conversations or spreading false narratives. The Metaweb represents an opportunity to rebuild the web as a space where truth prevails, fostering meaningful connections across cultures and communities.

A New Era of Digital Trust


The fake Angel Reese doping videos illustrate both the power and the peril of narratives in the digital age. What starts as a playful gesture on the basketball court can spiral into a fabricated scandal, amplified by algorithms and AI. It’s a reminder that our digital spaces need stronger governance, more accountability, and healthier information flows.

As we move forward, it’s time to reimagine the internet as a place where human agency, truth, and collaboration are protected. The Meta-Layer offers one path forward—a way to rebuild trust and foster authentic engagement in a world increasingly shaped by artificial content.

The question remains: Will we seize this moment to reset and recalibrate our information systems, or will we allow a Dead Web rife with misinformation to shape our collective reality? The choice is ours.

Solution on the Horizon: The Meta-Layer Initiative


The fabricated Angel Reese doping scandal shows how urgently our digital spaces need stronger governance, greater accountability, and healthier information flows. But how do we achieve this in a way that fosters collaboration without introducing further centralization or control?

This is where the Metaweb offers a glimpse of hope. The Metaweb, as described in The Metaweb: The Next Level of Internet, envisions a new space above today’s web—one built to enhance transparency, authenticity, and connection. It’s not just a redesign of existing platforms, but a new internet architecture that fosters shared realities and empowers people to engage meaningfully across fragmented digital spaces.

The Meta-Layer initiative is the first step toward bringing this vision to life—a project to build the application substrate necessary to make the Metaweb real. The initiative aims to create an open, decentralized meta-layer on top of today’s web that provides new contexts for interaction, collaboration, and truth.

By developing Meta-Layer Bridges, we can connect isolated sources of information, fostering collaboration and the creation of healthier information ecologies. Through overlay applications, smart tags, and meta-communities, participants will be able to interact in new ways, beyond the limitations of today’s platforms.
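
The book describes Bridges conceptually rather than as a concrete schema, so the record below is one hypothetical way such a link could be modeled: a typed, attributable connection between a claim and its context. Every field name here is an assumption.

```python
# Hypothetical data model for a Meta-Layer "Bridge": a typed,
# attributable link between two pieces of content on different
# sites. The schema is illustrative, not the initiative's spec.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Bridge:
    source_url: str       # e.g. a viral video making a claim
    target_url: str       # e.g. the official record refuting it
    relation: str         # "refutes", "supports", "contextualizes"
    created_by: str       # identity of the bridger
    created_at: datetime

bridge = Bridge(
    source_url="https://video.example/fake-doping-expose",
    target_url="https://ncaa.example/official-testing-statement",
    relation="refutes",
    created_by="did:example:factchecker123",
    created_at=datetime.now(timezone.utc),
)
print(f"{bridge.source_url} --{bridge.relation}--> {bridge.target_url}")
```

The point of such a record is that context travels with the content: a viewer who lands on the fake video could see, in the layer above it, that an authoritative source refutes the claim.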

At the heart of the Meta-Layer is the principle of decentralized governance—ensuring that participants retain privacy, data sovereignty, and human agency. AI containment protocols will further protect against the misuse of AI to manipulate conversations or distort the truth, helping to rebuild trust in digital ecosystems.
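
Decentralized authentication can take many forms, and the Meta-Layer’s actual design isn’t specified here; one common building block, though, is a digital signature over published content. The sketch below uses Ed25519 from Python’s cryptography package to show the basic verify-before-trust step.

```python
# Minimal provenance check with Ed25519 signatures
# (pip install cryptography). A creator signs their post;
# anyone holding the public key can verify that the bytes
# were not altered and did come from that key holder.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# Creator side: generate a keypair and sign the post.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

post = b"Official statement: no failed test occurred."
signature = private_key.sign(post)

# Reader side: verify before trusting.
try:
    public_key.verify(signature, post)
    print("Signature valid: content is authentic and unaltered.")
except InvalidSignature:
    print("Signature invalid: do not trust this content.")

# A tampered copy (e.g., a shallow fake's altered claim) fails:
try:
    public_key.verify(signature, b"Leaked report: failed test confirmed.")
except InvalidSignature:
    print("Tampered copy rejected.")
```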

Building the Next Level of the Internet

The Meta-Layer initiative is more than a technical project; it is an opportunity to reclaim control over our digital lives. By building a space where interactions occur with mutual accountability, it ensures that the web becomes a place for authentic connections and meaningful collaboration.

The Metaweb isn’t just an abstract concept—it’s a practical solution for the challenges facing the digital world today. And the Meta-Layer initiative invites anyone committed to these values to join us in creating this new internet infrastructure.

Together, we can build a web that reflects the needs of its users—one that is safe, empowering, and open to all.

Join the Conversation and Help Shape the Metaweb

The Meta-Layer initiative represents the first tangible step toward the next level of the internet, and we need your input to make it a reality. Join the discussion, provide feedback, and help us define the desirable properties of the Metaweb at: bridgit.io/meta-layer.
