When generated images take on a life of their own
(Photo by Cullan Smith on Unsplash.)
(This LinkedIn article is mirrored from the post on my website.)
This article certainly raises an eyebrow:
We've gone from "use AI to explain the world" to "use AI to manufacture reality."
Three points related to #aisafety stand out here:
1/ Fuel meets fire.
Deepfakes and other synthetic content are hardly new. But this (not-really-)Trump image serves as a harsh lesson. Faked content about high-profile or contentious people, places, and events is highly flammable material.
(You don't even need AI to cause this kind of trouble. Consider a 2013 tweet, sent from the (hacked) AP Twitter account. That one-liner – a false story about an incident at the White House – had an immediate impact on financial markets.)
2/ As the phrase goes, "a picture is worth a thousand words."
While we're all so enamored with ChatGPT's ability to generate text, let's remember that a synthetic image can spread faster and wider than a synthetic article on the same topic.
Images don't have language barriers and can be absorbed very quickly, so they tend to evoke a faster emotional reaction than text.
Note that the creator of the fake Trump image, Eliot Higgins, had clearly marked it as fake when he published it. But that warning label fell away as the visual spread.
3/ If you thought content moderation was difficult before ...
Social media platforms employ a mix of AI tools and human reviewers to spot problematic content. And that system is already a creaky cat-and-mouse game, at best.
What was once a matter of AI-Plus-Human Platform Moderation versus Human Actors is about to become AI-Plus-Human versus AI-Plus-Human. And it promises to be ugly. AI-generated fake content, spread by the emotional reactions of humans, could overwhelm existing moderation approaches and require the creation of new techniques.
(I'll skip my usual "this is similar to the early days of algorithmic trading" line, but you know it's true.)
Where do we go from here?
One subtle lesson here is that Higgins did not set out to cause trouble. He made the image in jest and labeled it accordingly, yet it still took on a life of its own.
What happens when the intended effect is to cause chaos? It’s clear that social media platforms don’t have the protections in place to stop the spread.
Worse still, I think they're some ways away from finding one.