When generated images take on a life of their own

(Photo by Cullan Smith on Unsplash.)

(This LinkedIn article is mirrored from the post on my website.)


This article certainly raises an eyebrow:

"Fake images of Trump arrest show ‘giant step’ for AI’s disruptive power" (Washington Post)

We've gone from "use AI to explain the world" to "use AI to manufacture reality."


Three points related to #aisafety stand out here:

1/ Fuel meets fire.

Deepfakes and other synthetic content are hardly new. But this (not-really-)Trump image serves as a harsh lesson: faked content about very popular and/or contentious people, places, and events is highly flammable material.

(You don't even need AI to cause this kind of trouble. Consider a 2013 tweet, sent from the (hacked) AP Twitter account. That one-liner – a false story about an incident at the White House – had an immediate impact on financial markets.)


2/ As the phrase goes, "a picture is worth a thousand words."

While we're all so enamored with ChatGPT's ability to generate text, let's remember that a synthetic image can spread faster and wider than a synthetic article on the same topic.

Images don't have language barriers and can be absorbed very quickly, so they tend to evoke a faster emotional reaction than text.

Note that the creator of the fake Trump image, Eliot Higgins, had clearly marked it as fake when he published it. But that warning label fell away as the visual spread.


3/ If you thought content moderation was difficult before ...

Social media platforms employ a mix of AI tools and human reviewers to spot problematic content. And that system is already a creaky cat-and-mouse game, at best.

What was once a matter of AI-Plus-Human Platform Moderation versus Human Actors is about to become AI-Plus-Human versus AI-Plus-Human. And it promises to be ugly. AI-generated fake content, spread by the emotional reactions of humans, could overwhelm existing moderation approaches and require the creation of new techniques.

(I'll skip my usual "this is similar to the early days of algorithmic trading" line, but you know it's true.)


Where do we go from here?

One subtle lesson here is that Higgins did not set out to cause trouble. He made the image in jest and labeled it accordingly, yet it still took on a life of its own.

What happens when the intended effect is to cause chaos? It’s clear that social media platforms don’t have the protections in place to stop the spread.

Worse still, I think they're some way off from finding a solution.
