Human response to AI-generated content
Assaph Mehr
Product & People Leader | Author & Public Speaker | Mentor & Coach | MAICD | Responsible AI
It’s becoming harder to distinguish between human and AI-generated content. As the systems improve, where is the boundary between what is uniquely human and what a machine can do? Is there such a limit, or are we kidding ourselves? Are those generative, reasoning AI models actually thinking? Does that include diffusion models? What is thinking anyway?
Experts in every field I’ve spoken with go through a similar curve of reactions to AI content. It’s hauntingly similar to Gartner’s hype cycle: “It’s so cool it can do that! Actually, when you look closely it’s utter crap. But then, perhaps I could use it just for some of the boring bits…”
So the question for today is: Why do we react the way we do to the various forms of AI content?
In large part, this is an expansion of a note in my review of Co-Intelligence by Ethan Mollick. This article delves deeper into the meaning of art, thinking, and creating, and into the limits of generative AI on the path to artificial general intelligence (AGI).
I'll cover: