AI's seductive mirage.
Machine safety at Incheon (12 November 2023).

tl;dr: AI systems appear, seductively, to be autonomous meaning-making entities, but they are actually tools that can’t make meaning on their own (for now). Understanding this seductive mirage reveals an important but hard-to-see trap in thinking about AI strategy.

Meaning-making is the crux for thinking about AI.

I’ve been thinking and writing about meaning-making, which is the act of making subjective decisions about the value of things.

Meaning-making is the defining characteristic of human-ness (at least for now). If meaning-making is a uniquely human capability, it follows that meaning-making is key to understanding how we should build effective (= aligned and safe) strategies for investing in, regulating, and working with AI.

As I pointed out previously, AI systems can’t yet make subjective decisions about value on their own — only humans can. And when we recognize that only humans can do meaning-making for now, we’re able to see more clearly what AI systems can do better than humans.

This issue is about an important but hard-to-see obstacle to doing effective AI strategy: the seductive mirage of AI systems, which is that they seem to be good at doing meaning-making when they actually can’t do it at all.

AI systems can’t be truly autonomous.

Making subjective decisions about value is how we figure out what is and isn’t worth doing. Meaning-making is how we choose what to do and become autonomous and self-governing. Entities can only be autonomous and self-governing if they can make their own subjective choices of action. An autonomous entity must have meaning-making capacity. Anything else can only aspire to be a tool to be used by a meaning-making entity.

Though AI systems are capable of producing output that looks like what meaning-making humans can produce (Turing’s imitation game), the appearance of [making outputs that are hard to distinguish from those resulting from subjective choices] is not the same thing as [actually making subjective choices that result in outputs]. One challenging implication of this: meaning-making as the distinguishing characteristic of human-ness is not a phenomenon susceptible to falsifiable hypotheses.

Here’s an instructive illustration. Below are two images.

Black squares.

The one on the right is the output of a process of meaning-making which is now thought to have been one of the wellsprings of an entire movement in modern art: “Black Square” (1913/1915) by Kazimir Malevich. The one on the left is a black square that does not have the same meaning-making behind it as the one on the right, but you would not be able to tell them apart just by looking at them side by side.

I’ll refrain from going into more detail because I’ve already unpacked (here and here) the logic of why output indistinguishability is a poor way to think about evaluating AI systems. Because machines can’t make meaning at all but humans can, all the meaning-making work in AI is being done by humans (individually or in groups) somewhere along the line.

If this seems like an Implausibly Big Claim, have a look at the table below, especially the parts highlighted in yellow. (The table is from one of my earlier essays on AI’s meaning-making problem.)

4 types of meaning-making that AI systems can’t do — with illustrative examples from AI system use and development.

So, for now, AI systems should be thought of as tools to be used by humans, not autonomous entities with their own agency (the ability to decide on their own what actions are worthwhile).

Tools and possibilities for action.

Tools work by giving us possibilities for action. A door handle is a tool which makes it possible for the user to take the action of opening a door. Usable and easily perceived action-possibilities become the ones which users most frequently try to use the tool for. But some tools don’t provide the action-possibilities that they seem to provide.

Seductive mirages.

When a tool’s action-possibilities are easily perceived/used but aren’t real, the tool’s users are encouraged to use the tool in those ways even though the tool doesn’t work as intended. Fake but easy-to-perceive/use action-possibilities are seductive mirages.

A benign example of a seductive mirage tool is a convincing-looking fake door. It’s benign because the worst that happens is that the user quickly realizes that the door is fake and can adjust behavior accordingly by looking for another entrance to the building.

Les Spécialistes

Other tools with seductive mirages are less benign and more insidious because they create inaccurate user-beliefs whose inaccuracies are not easily or quickly detectable, and which change user- or system-behavior in damaging ways.

Just a few examples:

  1. Antimicrobials. Powerful antimicrobials (like antibiotics) can make users believe that the problem of microbial disease has been solved with a low-cost and easy-to-use chemical compound. This changes user behavior in ways that increase the prevalence of disease-causing microbes (e.g., using antibiotics in livestock feed as an alternative to careful herd management and feeding) or create bacteria that circumvent antibiotic treatment (e.g., overuse of antibiotics leading to antibiotic-resistant bacteria). This both reduces the efficacy of antimicrobials and increases the scale/intensity of the problems of disease-causing microbes.
  2. Tactical equipment (firearms, personal armor, weapons, etc.). Tactical equipment can make users believe that they have special ability/prowess that they actually don’t have. This changes user behavior to make them more likely to initiate or escalate violent crime (e.g., gun violence in the US). This can reduce the user’s safety and the general safety of the broader community; this phenomenon is particularly well-documented in the context of firearm ownership and access. The seductive mirages of tactical equipment are closely connected to mall ninjas and the aesthetics of tacticool.
  3. Risk management tools (cost-benefit analyses, expected value calculations, etc.). Risk management tools frequently make their users believe that they have analyzed and mitigated all unknowns. This changes user behavior so users implement inappropriate management strategies for uncertainty (e.g., when the WHO advised against Covid-19 travel/trade restrictions in February 2020 based on a cost-benefit analysis), or so they conflate risk and uncertainty in developing strategy (what I’ve previously written about as overloading “risk” and appropriating “uncertainty”).

These are all seductive mirages. Mirages that are seductive are especially problematic because they entice users to believe and act as if they are real.

AI’s seductive mirage.

In thinking of AI systems as tools, the seductive mirage is that AI systems are (or can soon be) autonomous, self-governing systems that can make meaning like humans do. This is a mirage because only humans can make meaning (for now), and it is seductive because so much (money, reputation, etc.) rides on believing that AI systems already can (or soon will) do anything humans can.

This seductive mirage is insidious and malignant because it is hidden in the gaps between the real possibilities of AI systems, the easily perceived/used possibilities of AI systems, and our inadequate understanding of why meaning-making is important and where meaning-making happens in AI systems.

Here’s the logic:

  1. AI systems produce outputs that resemble what meaning-making humans can produce, and …
  2. Lack of clarity about meaning-making leads us to incorrectly believe that meaning-making outputs are the same as meaning-making activity, so …
  3. AI systems appear to have affordances that are coterminous with human capabilities. Meaning-making is the most important of these affordances, all of which are easily perceived and used but …
  4. AI systems can’t make meaning on their own (yet) — so meaning-making is a fake AI system affordance that is nonetheless easily perceived/used.

The seductive mirage of AI is that AI systems make us believe they are closer to matching the full range of human capabilities than they actually are. (This also plays into the valuation narrative of AI technology companies working on AGI.)

The mirage is more seductive because we don’t pay serious enough attention to what meaning-making is and how deeply intertwined it is with the actions we take.

Why is this important? Because meaning-making is required whenever a task requires “discretion” or a “judgment call.” So meaning-making work is done by an underwriter deciding whether to insure a building project using a new construction method, an entrepreneur choosing what product to focus her startup on, an investment committee structuring an investment-for-equity deal with a startup, a panel of judges ruling on the interpretation of law in a crucial case — and a huge number of other tasks both vital and trivial.

The trap of AI’s seductive mirage.

So, the trap is this: when we don’t recognize that meaning-making is foundational to work and woven into nearly every task, and AI systems present the seductive mirage of being able to produce outputs that are indistinguishable from human outputs, it becomes too easy to give this meaning-making work away to AI systems. This is because the meaning-making work is bundled up with all the other work machines are much better at doing, like data management, data analysis, and rule-following.

So the result of the seductive mirage of AI meaning-making is that it becomes too tempting to design or use AI systems for work that requires meaning-making, and to ignore or omit the humans who used to do the meaning-making. In other words, we outsource meaning-making to machines without understanding what meaning-making is, why it is important, and that machines can’t do it at all.

At best, this way of thinking about product development and management is suboptimal (e.g., garbage results when prompting name-your-LLM). At worst, it can be disastrous (e.g., automatically flagging potential welfare overpayments and escalating them into debts, causing widespread trauma and suicides among welfare recipients). The phrase “sleepwalking into a bad situation” comes to mind. If humans decide that we’re going to surrender subjective decisions about value to non-humans, we should at least do so fully aware that we’re doing it.

To design work that takes best advantage of the respective capabilities of humans and AI systems, we must examine work carefully so that we can unbundle it: separate the meaning-making parts from the other parts that can increasingly be done better by machines.
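To make unbundling concrete, here is a minimal sketch in Python of what it could look like in practice. Everything in it is a hypothetical illustration (the step names, the underwriting example from above, the routing logic), not a prescribed implementation: the point is only that the single meaning-making step is identified explicitly and routed to a human instead of being swept into the automated bundle.

```python
# A deliberately simple sketch of "unbundling" a task (hypothetical names
# throughout). Mechanical steps -- data gathering, rule-following -- are
# marked as machine work; the one subjective judgment call is marked as
# meaning-making and is routed to a human instead of being automated.

from dataclasses import dataclass
from typing import Callable, Dict, List

Context = Dict[str, object]

@dataclass
class Step:
    name: str
    requires_meaning_making: bool  # does this step need a subjective judgment call?
    run: Callable[[Context], Context]

def gather_project_data(ctx: Context) -> Context:
    # Mechanical: fetching and normalizing records is work machines do well.
    ctx["data"] = {"construction_method": "novel", "claims_history": []}
    return ctx

def apply_underwriting_rules(ctx: Context) -> Context:
    # Mechanical: deterministic rule-following, also machine-friendly.
    ctx["flags"] = ["novel_method_without_actuarial_baseline"]
    return ctx

def decide_whether_to_insure(ctx: Context) -> Context:
    # Meaning-making: deciding whether this risk is *worth* taking is a
    # subjective decision about value, so it stays with the underwriter.
    ctx["decision"] = "PENDING_HUMAN_UNDERWRITER"
    return ctx

PIPELINE: List[Step] = [
    Step("gather_data", False, gather_project_data),
    Step("apply_rules", False, apply_underwriting_rules),
    Step("decide_to_insure", True, decide_whether_to_insure),
]

def execute(pipeline: List[Step]) -> Context:
    ctx: Context = {}
    for step in pipeline:
        # The unbundling happens here: a step never gets automated merely
        # because it is bundled with steps that machines are better at.
        actor = "HUMAN" if step.requires_meaning_making else "machine"
        print(f"[{actor}] {step.name}")
        ctx = step.run(ctx)
    return ctx

if __name__ == "__main__":
    print(execute(PIPELINE))
```

The specific mechanism matters far less than the discipline it encodes: for each step in a piece of work, ask whether it requires a subjective decision about value before handing it to a machine.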

And to do this, we have to recognize that meaning-making (subjective decision-making about value) is essential for understanding the future of work and for understanding how to build good tools for that future.

A new philosophy of product management.

In the next few weeks, I’ll wrap up this sequence on meaning-making and AI with a modest proposal for a new philosophy of product management centered on understanding meaning-making and unbundling meaning-making work (leave it to humans) from all the other stuff machines are better at. This is what we need to build good products in a time when machines and tools can’t do the meaning-making work that humans do but are nonetheless magically sophisticated at mimicking human outputs — the present.


I crossposted this article from my biweekly newsletter — if you liked it, consider subscribing.

You could check out my conversation with Charley Johnson about meaning-making in AI. We talk about what meaning-making is, why it is important, how it is misunderstood, and what a new philosophy of product management that engages deeply with meaning-making could look like.


Postscript: In this article, I write about the action-possibilities of tools, focusing on those which are easily perceived/used but fake. A related term is “affordances,” which has slightly different definitions depending on what discipline uses it. In psychology, affordances are the real action-possibilities an object provides to its user (“real” in the sense that if the user chooses to use the action-possibility, the action can actually be accomplished). A door handle only has the “real” action-possibility of opening a door if it is connected to the latch mechanism that secures the door. If not, the affordance is just a mirage. In design research, affordances are the action-possibilities that a user can perceive about an object and actually try to use. In this slightly different definition, emphasizing the perceptibility and usability of the affordance, a working door handle that doesn’t look like a handle or which cannot be reached by the user wouldn’t be considered to have the action-possibility of opening the door.


Kirsten Gibbs
2 months ago

Love it: "the appearance of" [being or doing a thing] "is not the same thing as" [being or doing the thing].

Amy Heaton, PhD
2 months ago

Great insight! I find myself trying to clumsily explain this all the time but you have so elegantly explained what I stumble my way through in conversations.
