Media Manipulation in the AI Era

The Repercussions of Factual and Perceived AI Alterations:

I’ve been deeply engrossed (for better and worse) in monitoring the events unfolding in the Middle East. On a personal level, it is both sad and shocking. On a professional level, I have been exercising muscles built over many years as a prolific user of social media and as someone who has worked in communications. I know how rampant disinformation is online and have a good idea of how to spot it (a mix of drilling down to the source and looking for credible references that either verify or deny the media). That said, it’s a complex process and not an exact science. One of the bigger observations from these past few weeks of intensive media monitoring is that many, if not most, disinformation tactics are not all that high-tech. Much of the disinformation spreading across multiple social networks, and noticeably on X, relies on arguably the least technically sophisticated way of manipulating media: recycled footage. From Rolling Stone:

Viral clips purporting to depict the current crisis have included missile strike footage of the Syrian War in 2020, footage of Egyptian troops paragliding over Cairo (there are confirmed reports that Hamas militants entered Israel with paragliders), and even footage of Bruno Mars fans running towards the stage at one of his concerts, which was falsely presented as video of Israelis fleeing a massacre by Hamas forces at a music festival outside of Gaza. Videos of casualties being pulled from the rubble, as well as troop mobilizations from past military conflicts, have also been circulated online.

Still, instinctively, I feel that AI combined with supercharged, super-viral platforms like TikTok, as well as a less regulated Twitter (X), will likely pour gasoline on the fire that is media disinformation today. It’s also not just about lowering the friction it takes to create media such as images or videos, which is what has made text-to-media prompting the Generative AI game changer that it is. Media manipulation in the AI era is already adding to the confusion because we may look at a legitimate piece of media and assume it is AI-generated when it is actually real. This is precisely what happened in the first week of the Israel-Hamas war (and media war). Conservative personality Ben Shapiro shared an image of a charred infant with his millions of followers, and soon after, disinformation spread claiming the image was AI-generated when, in fact, it was authentic:

“Ben Shapiro used an AI generated image to try and whip people into pro-war frenzy,” one user wrote. “Falsifying evidence and outright lies like this is exactly how we got into the Iraq war. Thankfully this time we have the technology to call out their lies.”

The above underscores where AI detection tools are falling short and what the consequences are when they do. Things have not been much more reliable on the LLM front, either:

And the confusion between truth and falsehood is not the property of laypeople only. Google's Bard and Microsoft's Bing chatbots, for example, claimed last Wednesday that Israel and Hamas had reached a ceasefire. Although Google and Microsoft warn that their tools are experimental and not yet accurate, they are already incorporating AI content into their search results, so the effort to know what is really real becomes even more never-ending.

A precedent has now been set: what you are looking at can be dismissed as AI-generated simply because an AI tool said it was. The unfolding war in the Middle East underscores just how high the stakes are when it comes to validating or invalidating whether media is authentic and credible or has been generated or manipulated.

Humans In The Loop: Truth Seekers Wanted

So, we have to assume that the combination of the lowest-tech tactics and the highest, involving AI, is now part of the media landscape we consume and interact with. In AI circles especially, it’s common to use the phrase “human in the loop” to underscore that somewhere in the middle of all that data input, machine learning, and neural network development are human beings checking, double-checking, and validating what the machines are doing. In the case of media manipulation and AI's role in either generating it, identifying it, or misidentifying it, humans have never been more essential, just as the truth has never been more obscure. Humans working with machines will be the truth seekers we need: not one or the other, but both.
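To make that idea concrete, here is a minimal, hypothetical sketch (in Python) of what a human-in-the-loop verification step can look like: the automated detector never gets the final say, and anything it cannot call with high confidence is routed to a person. The detector and reviewer functions below are stand-ins, not any real model or fact-checking API, and the confidence threshold is an illustrative assumption.

# Hypothetical human-in-the-loop gate for media verification.
# The detector is a stub, not a real model; the threshold is illustrative.

from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # "likely-authentic", "likely-generated", or similar
    confidence: float  # 0.0 to 1.0, as reported by the automated check
    reviewer: str      # "machine" or "human"

def stub_detector(media_id: str) -> tuple[str, float]:
    """Placeholder for an automated detector; a real system would run a model here."""
    return "likely-generated", 0.55  # pretend the detector is unsure

def human_review(media_id: str) -> str:
    """Placeholder for a human fact-checker's call (source tracing, context checks)."""
    return "likely-authentic"

def verify(media_id: str, threshold: float = 0.9) -> Verdict:
    """Trust the machine only above a confidence threshold; otherwise ask a human."""
    label, confidence = stub_detector(media_id)
    if confidence >= threshold:
        return Verdict(label, confidence, reviewer="machine")
    return Verdict(human_review(media_id), confidence, reviewer="human")

print(verify("viral-clip-001"))

The design point is simply that the machine's low-confidence cases are exactly the ones a human sees, rather than the machine's verdict being published as-is.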

Jenny Nicholson

Making Magic with Machines at Queen of Swords

1y

This is exactly why I think AI gives web3 the use case it was looking for. There's NO WAY to track and flag disinformation. Instead, news outlets are going to need a method to verify the provenance and content of their stories. Public, trackable, immutable -- this is precisely why blockchain tech exists. I hope the AP and every other major news outlet is already working on this, because the world is gonna need it.
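As an editorial aside, here is a minimal sketch of the provenance idea in the comment above, under the assumption that an outlet fingerprints a media file at publish time, signs the record, and posts it (or its hash) to a public, append-only ledger so anyone can later check whether a circulating copy matches the original. The file names, outlet name, and signing key below are hypothetical placeholders; this is not any outlet's or standard's actual implementation (efforts like C2PA go much further).

# Hypothetical provenance record for a published media file.
# Uses only the Python standard library; names and keys are placeholders.

import hashlib
import hmac
import json
import time

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of the file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(path: str, outlet: str, signing_key: bytes) -> dict:
    """Build a signed record; in practice it (or its hash) would be published
    to an append-only public ledger rather than kept privately."""
    record = {
        "outlet": outlet,
        "file_sha256": fingerprint(path),
        "published_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def matches_original(path: str, record: dict) -> bool:
    """True only if the circulating copy is byte-identical to the published file."""
    return fingerprint(path) == record["file_sha256"]

# Hypothetical usage (paths and key are placeholders):
#   rec = provenance_record("photo.jpg", "Example News", b"outlet-signing-key")
#   matches_original("downloaded_copy.jpg", rec)

One real-world caveat: platforms typically re-encode uploads, which changes the raw bytes and therefore the hash, so production systems would need a more robust fingerprint than this byte-level sketch.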
