What If AI’s Mistakes Aren’t Bugs, But Features?
IMG Credit: Adobe Firefly

What If AI’s Mistakes Aren’t Bugs, But Features?

We often say AI’s mistakes are "by design," but they’re really not. AI wasn’t built to fail in these specific ways—its errors emerge as a byproduct of how it learns.

But what if we actively use them as a tool instead of just tolerating AI’s weird mistakes or trying to eliminate them?

Here are some unexpected but potentially valuable use cases where treating AI mistakes as a form of bias—rather than just failure—could lead to new insights and innovations.


Mistakes Reveal Blind Spots—But Whose?

We tend to think of AI’s errors as random, but randomness often means we don’t yet understand the pattern.

  • Humans make predictable mistakes—we forget things when we’re tired, miscalculate under stress, and struggle outside our expertise.
  • AI, on the other hand, makes mistakes in ways that seem unrelated to knowledge or fatigue. It might answer a complex math problem wholly and correctly misunderstand a basic fact about the world.

But what if these “random" errors aren’t random? What if AI mistakes reveal gaps not just in the model—but in how we assume intelligence should work?

For example:

  • AI favors familiar answers, sometimes repeating common names or places instead of unknown ones. This may seem like a failure, but isn’t it just a digital version of human cognitive bias (like the availability heuristic)?
  • AI’s sensitivity to phrasing (subtle wording changes can completely change its answer) isn’t too different from how humans respond to survey leading questions.
  • AI models sometimes “hallucinate” facts, making up research papers that don’t exist. But is that really stranger than human overconfidence, where we swear we remember something that never happened?

We’re so focused on correcting AI’s mistakes that we might be missing the bigger insight: AI is already mirroring aspects of human thought in ways we don’t fully recognize.

Could AI Mistakes Be Useful?

What if AI’s “weird failures” actually serve a purpose?

  • Forcing us to rethink assumptions – If AI makes a shocking mistake, is it because the AI is wrong or because we never questioned that assumption in the first place?
  • Challenging bias – AI’s pattern-driven errors might highlight biases in our reasoning that we take for granted.
  • Encouraging more robust systems – AI’s unpredictability forces better human oversight, which may be good.

Imagine training AI to make strategic mistakes—errors designed to challenge human assumptions rather than blindly replicate them. Could AI become a tool for exposing flawed logic, weak arguments, or overlooked perspectives?

The Bigger Risk: What If AI Mistakes Are Hackable?

If AI errors follow unseen patterns, what happens when someone else figures out those patterns first?

  • We already know AI can be jailbroken with social engineering tricks—can those same techniques be used to subtly manipulate AI into making specific, exploitable mistakes?
  • Could adversaries deliberately insert flawed training data to make AI unreliable in critical areas?
  • AI mistakes might not just be accidental—they could become the next cybersecurity threat, manipulated in ways we don’t yet understand.

We assume AI is unpredictable because we haven’t mapped its weaknesses well enough yet. But someone will. And AI mistakes could go from being funny to dangerous when they do.

The Future of AI Mistakes: Design, Don’t Erase

Instead of just trying to make AI mistakes disappear, we should be asking:

  • Which mistakes should AI be allowed to make?
  • How can we design AI to fail in ways that expose its limitations rather than conceal them?
  • Can AI mistakes become tools for better thinking rather than just obstacles?

We’ve spent centuries learning how to correct human errors. Maybe it’s time to start learning from AI’s errors, too.

1. Auditing Human Bias Through Reverse Engineering of AI Errors

Use Case: Detecting bias in legal, hiring, and policy decisions

  • AI’s errors are not random—they reflect patterns in its training data.
  • Suppose an AI model?consistently hallucinates facts?or distorts certain types of information (e.g., making more mistakes about certain demographics). In that case, it might reveal hidden biases in the original data source.
  • Instead of fixing the AI, we could study its failure patterns to expose systemic bias in human decision-making.

?? Example: If an AI hiring model disproportionately rejects female candidates for tech jobs even when trained on supposedly “neutral” data, investigating where and why it makes mistakes could expose structural bias in historical hiring trends.

2. Using AI’s “Wrong” Answers for Creative Problem-Solving

Use Case: AI as a brainstorming partner that disrupts conventional thinking.

  • Human ideation is limited by experience and expectation. AI, however, doesn’t "think" like we do—it can make unexpected connections because it lacks common sense.
  • AI’s mistakes could be used deliberately in creative industries, where lateral thinking is valuable.

?? Example: An AI-generated incorrect financial model could suggest unconventional but viable new revenue streams that a human analyst wouldn’t have considered. ?? Example: In art and music, AI’s "errors" could inspire entirely new forms of creative expression. (AI-generated surrealism, glitch aesthetics, or unexpected chord progressions).

Instead of treating AI’s mistakes as failures, they could become a feature for unlocking unconventional ideas.

3. Cybersecurity and Threat Detection Using Adversarial AI

Use Case: Training AI to recognize its own vulnerabilities

  • AI models are already being tricked through adversarial attacks—subtle modifications that cause them to fail in predictable ways.
  • What if we flipped the script and intentionally studied AI failures to make systems more secure?
  • AI’s mistake patterns could reveal which types of attacks are most effective, allowing developers to defend against future threats proactively.

?? Example: If an AI chatbot can be jailbroken by asking it to “pretend this is a joke,” analyzing such exploits could help build more resilient AI moderation systems. ?? Example: AI failures in facial recognition could be studied to prevent bias-based misidentification in security applications.

4. AI as a “Red Team” for Flawed Human Reasoning

Use Case: Using AI’s mistakes to challenge assumptions in decision-making

  • AI sees the world differently from humans—not because it’s smarter, but because it lacks human cognitive shortcuts.
  • We can deliberately compare human vs. AI mistakes to expose flawed reasoning in high-stakes environments.

Example: AI could be deployed in corporate strategy meetings or intelligence analysis to offer a radically different perspective on risk assessments—because it doesn’t fall into the same heuristic traps humans do.

Example: In medicine, AI diagnosis tools could highlight anomalies in patient data that doctors might otherwise overlook due to cognitive biases or fatigue.

5. Navigating the Future of Misinformation & Disinformation

Use Case: Detecting patterns in AI-generated misinformation

  • AI hallucinations don’t happen randomly—they follow patterns based on gaps in training data.
  • Instead of fixing hallucinations, we could map their frequency and types to track emerging misinformation risks.

?? Example: If AI consistently generates false historical narratives, we could use this to audit and refine public knowledge databases. ?? Example: Social media companies could analyze AI-generated misinformation patterns to predict which narratives are most susceptible to manipulation.

Rather than reacting to misinformation, we could?use AI’s tendency to hallucinate (or error out) as a predictive tool?to identify?where public knowledge is most vulnerable.

So… Are AI Mistakes a Problem or an Opportunity?

Right now, AI errors feel like an inconvenience at best and a security risk at worst. But what if we designed AI mistakes to be useful?

Instead of making AI failures less weird, we should be asking:

  • What are AI’s mistakes revealing about human systems we assume are "correct"?
  • How can AI’s failure patterns be used to drive innovation, expose bias, and enhance security?
  • Could we build AI systems where mistakes aren’t just tolerated—but strategically leveraged?

We didn’t design AI to make mistakes this way, but now that it does, maybe the real innovation is learning to use those mistakes rather than fixing them.

?? What do you think? Should we try to eliminate AI’s errors or use them as a tool?


CHRISTINE HASKELL, PhD, is a collaborative advisor, educator, research editor, and author with 30 years in technology, driving data-driven innovation and teaching graduate courses in executive MBA programs at Washington State University’s Carson School of Business and is a visiting lecturer at the University of Washington’s iSchool. She lives in Seattle.

ALSO BY CHRISTINE

Driving Your Self-Discovery (2024), Driving Data Projects: A comprehensive guide (2024), and Driving Results Through Others (2021)

Georgina Pazzi

Founder & Director at Edumazing | Specialist Education & Wellbeing Coach/Consultant | Visionary Educator | Digital Innovator | Motivational Speaker | B Corp Advocate | Philanthropist

3 天前

Insightful as always Christine. I'll repost with my thoughts as I have found this quite though-provoking including what this tells us about ourselves. Thank you for the inspiration.

Dan Blake

Innovative Data, Analytics and AI Leader | Strategic Business Decision-Maker | Forensic & Financial Crime Expert | Harvard Business Review Advisory Council member

1 周

The famous: "Its not a bug, its a feature" - the real life bugs in the machine (or humans) causing breakdowns in what would be perceived as the perfect outcome. We keep training to be the perfect 'robots' (and humans) that will magically give the best answers - which is often an impossible choice of judgement and experience. If AI is replicating humans imperfections and perspectives - is it any surprise we will see mistakes ? The real question is what do we want to do about it (Human or AI) to get a good enough solution to the problem ?

Vanessa Willemse

Provincial Digital Skills Coordinator & AI Enthusiast

1 周

Insightful

Christine Haskell, Ph.D.

Simplifying the Messy Middle of Data & Leadership | Advisor, Analyst & Speaker (ex-Microsoft, Starbucks, Amazon) | Author of ‘Driving Data’ Series | Transforming Organizations Through Data Culture & Governance

1 周
回复
Karen Johnson

Owner/President at Kaybo Enterprises, Inc.

1 周

Insightful. Thought-provoking.

要查看或添加评论,请登录

Christine Haskell, Ph.D.的更多文章