AI's seductive mirage.
Vaughn Tan
Consultant, author, and speaker focusing on strategy for uncertainty and innovation.
tl;dr: AI systems seductively appear to be autonomous meaning-making entities, but they are actually tools that can't make meaning on their own (for now). Understanding this seductive mirage reveals an important but hard-to-see trap in thinking about AI strategy.
Meaning-making is the crux for thinking about AI.
I’ve been thinking and writing about meaning-making, which is the act of making subjective decisions about the value of things.
Meaning-making is the defining characteristic of human-ness (at least for now). If meaning-making is a uniquely human capability, it follows that meaning-making is key to understanding how we should build effective (= aligned and safe) strategies for investing in, regulating, and working with AI.
As I pointed out previously, AI systems can’t yet make subjective decisions about value on their own — only humans can. And when we recognize that only humans can do meaning-making for now, we’re able to see more clearly what AI systems can do better than humans.
This issue is about an important but hard-to-see obstacle to doing effective AI strategy: the seductive mirage of AI systems, which is that they seem to be good at doing meaning-making when they actually can’t do it at all.
AI systems can’t be truly autonomous.
Making subjective decisions about value is how we figure out what is worth doing or not. Meaning-making is how we choose what to do and become autonomous and self-governing. Entities can only be autonomous and self-governing if they can make their subjective choices of actions on their own. An autonomous entity must have meaning-making capacity. Anything else can only aspire to be a tool to be used by a meaning-making entity.
Though AI systems can produce output that looks like what meaning-making humans produce (Turing's imitation game), the appearance of [making outputs that are hard to distinguish from those resulting from subjective choices] is not the same thing as [actually making subjective choices that result in outputs]. One challenging implication: meaning-making as the distinguishing characteristic of human-ness is not a phenomenon susceptible to falsifiable hypotheses.
Here’s an instructive illustration. Below are two images.
The one on the right is the output of a process of meaning-making now thought to have been one of the wellsprings of an entire movement in modern art — "Black Square" (1913/1915) by Kazimir Malevich. The one on the left is a black square that does not have the same meaning-making behind it as the one on the right — but you would not be able to tell them apart just by looking at them side by side.
I’ll refrain from going into more detail about this because I’ve already unpacked here and here the logic of why output indistinguishability is a poor way to think about evaluating AI systems. Because machines can’t make meaning at all but humans can, all the meaning-making work in AI is being done by humans (individually or in groups) somewhere along the line.
If this seems like an Implausibly Big Claim, have a look at the table below, especially the parts highlighted in yellow. (The table is from one of my earlier essays on AI’s meaning-making problem.)
So, for now, AI systems should be thought of as tools to be used by humans, not autonomous entities with their own agency (the ability to decide on their own what actions are worthwhile).
Tools and possibilities for action.
Tools work by giving us possibilities for action. A door handle is a tool which makes it possible for the user to take the action of opening a door. Useable and easily perceived action-possibilities become the ones which users most frequently try to use the tool for. But some tools don’t provide the action-possibilities that they seem to provide.
Seductive mirages.
When a tool’s action-possibilities are easily perceived/used but aren’t real, the tool’s users are encouraged to use the tool in those ways even when they don’t work as intended. Fake but easy-to-perceive/use action-possibilities are seductive mirages.
A benign example of a seductive mirage tool is a convincing-looking fake door. It’s benign because the worst that happens is that the user quickly realizes that the door is fake and can adjust behavior accordingly by looking for another entrance to the building.
Other tools with seductive mirages are less benign and more insidious because they create inaccurate user-beliefs whose inaccuracies are not easily or quickly detectable, and which change user- or system-behavior in damaging ways.
Just a few examples:
These are all seductive mirages. Mirages that are seductive are especially problematic because they entice users to believe and act as if they were real.
AI’s seductive mirage.
In thinking of AI systems as tools, the seductive mirage is that AI systems are (or can soon be) autonomous, self-governing systems that can make meaning like humans do. This is a mirage because only humans can make meaning (for now), and it is seductive because so much (money, reputation, etc.) rides on believing that AI systems already can (or soon will) do anything humans can.
This seductive mirage is insidious and malignant because it is hidden in the gap between the real possibilities of AI systems, the easily perceived/used possibilities of AI systems, and our inadequate understanding of why meaning-making is important and where meaning-making happens in AI systems.
Here’s the logic:
The seductive mirage of AI is that AI systems make us believe they are closer to matching the full range of human capabilities than they actually are. (This also plays into the valuation narrative of AI technology companies working on AGI.)
The mirage is more seductive because we don’t pay serious enough attention to what meaning-making is and how deeply intertwined it is with the actions we take.
Why is this important? Because meaning-making is required whenever a task requires “discretion” or a “judgment call.” So meaning-making work is done by an underwriter deciding whether to insure a building project using a new construction method, an entrepreneur choosing what product to focus her startup on, an investment committee structuring an investment-for-equity deal with a startup, a panel of judges ruling on the interpretation of law in a crucial case — and a huge number of other tasks both vital and trivial.
The trap of AI’s seductive mirage.
So, the trap is this: when we don't recognize that meaning-making is foundational to work and woven into nearly every task, and AI systems present the seductive mirage of producing outputs indistinguishable from human outputs, it becomes too easy to give this meaning-making work away to AI systems. This is because the meaning-making work is bundled up with all the other work they're much better at doing, like data management, data analysis, and rule-following.
So the result of the seductive mirage of AI meaning-making is that it becomes too tempting to design or use AI systems for work which requires meaning-making and ignore/omit the humans who used to do the meaning-making. In other words, outsourcing meaning-making to machines without understanding what meaning-making is, why it is important, and that machines can’t do it at all.
At best, this way of thinking about product development and management is suboptimal (e.g., garbage results when prompting name-your-LLM). At worst, it can be disastrous (e.g., automatically flagging potential welfare overpayments and escalating them into debts, causing widespread trauma and suicides among welfare recipients). The phrase "sleepwalking into a bad situation" comes to mind. If humans decide that we're going to surrender subjective decisions about value to non-humans, we should at least do so fully aware that we're doing it.
To design work that takes best advantage of the respective capabilities of humans and AI systems, we must examine work carefully so that we can unbundle it: separate the meaning-making parts from the other parts that can increasingly be done better by machines.
And to do this, we have to recognize that meaning-making — subjective decision-making about value — is essential for understanding the future of work and for understanding how to build good tools for that future.
A new philosophy of product management
In the next few weeks, I’ll wrap up this sequence on meaning-making and AI with a modest proposal for a new philosophy of product management centered on understanding meaning-making and unbundling meaning-making work (leave it to humans) from all the other stuff machines are better at. This is what we need to build good products in a time when machines and tools can’t do the meaning-making work that humans do but are nonetheless magically sophisticated at mimicking human outputs — the present.
I crossposted this article from my biweekly newsletter — if you liked it, consider subscribing.
You can also check out my conversation with Charley Johnson about meaning-making in AI. We talk about what meaning-making is, why it is important, how it is misunderstood, and what a new philosophy of product management that engages deeply with meaning-making could look like.
Postscript: In this article, I write about the action-possibilities of tools, focusing on those which are easily perceived/used but fake. A related term is “affordances,” which has slightly different definitions depending on what discipline uses it. In psychology, affordances are the real action-possibilities an object provides to its user (“real” in the sense that if the user chooses to use the action-possibility, the action can actually be accomplished). A door handle only has the “real” action-possibility of opening a door if it is connected to the latch mechanism that secures the door. If not, the affordance is just a mirage. In design research, affordances are the action-possibilities that a user can perceive about an object and actually try to use. In this slightly different definition — emphasizing the perceptibility and useability of the affordance — a working door handle that doesn’t look like a handle or which cannot be reached by the user wouldn’t be considered to have the action-possibility of opening the door.