What If AI’s Mistakes Aren’t Bugs, But Features?
Christine Haskell, Ph.D.
Simplifying the Messy Middle of Data & Leadership | Advisor, Analyst & Speaker (ex-Microsoft, Starbucks, Amazon) | Author of ‘Driving Data’ Series | Transforming Organizations Through Data Culture & Governance
We often say AI’s mistakes are "by design," but they’re really not. AI wasn’t built to fail in these specific ways—its errors emerge as a byproduct of how it learns.
But what if we actively use them as a tool instead of just tolerating AI’s weird mistakes or trying to eliminate them?
Here are some unexpected but potentially valuable use cases where treating AI mistakes as a form of bias—rather than just failure—could lead to new insights and innovations.
Mistakes Reveal Blind Spots—But Whose?
We tend to think of AI’s errors as random, but randomness often means we don’t yet understand the pattern.
But what if these “random" errors aren’t random? What if AI mistakes reveal gaps not just in the model—but in how we assume intelligence should work?
For example:
We’re so focused on correcting AI’s mistakes that we might be missing the bigger insight: AI is already mirroring aspects of human thought in ways we don’t fully recognize.
Could AI Mistakes Be Useful?
What if AI’s “weird failures” actually serve a purpose?
Imagine training AI to make strategic mistakes—errors designed to challenge human assumptions rather than blindly replicate them. Could AI become a tool for exposing flawed logic, weak arguments, or overlooked perspectives?
The Bigger Risk: What If AI Mistakes Are Hackable?
If AI errors follow unseen patterns, what happens when someone else figures out those patterns first?
We assume AI is unpredictable because we haven’t mapped its weaknesses well enough yet. But someone will. And AI mistakes could go from being funny to dangerous when they do.
The Future of AI Mistakes: Design, Don’t Erase
Instead of just trying to make AI mistakes disappear, we should be asking:
We’ve spent centuries learning how to correct human errors. Maybe it’s time to start learning from AI’s errors, too.
1. Auditing Human Bias Through Reverse Engineering of AI Errors
Use Case: Detecting bias in legal, hiring, and policy decisions
?? Example: If an AI hiring model disproportionately rejects female candidates for tech jobs even when trained on supposedly “neutral” data, investigating where and why it makes mistakes could expose structural bias in historical hiring trends.
2. Using AI’s “Wrong” Answers for Creative Problem-Solving
Use Case: AI as a brainstorming partner that disrupts conventional thinking.
?? Example: An AI-generated incorrect financial model could suggest unconventional but viable new revenue streams that a human analyst wouldn’t have considered. ?? Example: In art and music, AI’s "errors" could inspire entirely new forms of creative expression. (AI-generated surrealism, glitch aesthetics, or unexpected chord progressions).
Instead of treating AI’s mistakes as failures, they could become a feature for unlocking unconventional ideas.
3. Cybersecurity and Threat Detection Using Adversarial AI
Use Case: Training AI to recognize its own vulnerabilities
?? Example: If an AI chatbot can be jailbroken by asking it to “pretend this is a joke,” analyzing such exploits could help build more resilient AI moderation systems. ?? Example: AI failures in facial recognition could be studied to prevent bias-based misidentification in security applications.
4. AI as a “Red Team” for Flawed Human Reasoning
Use Case: Using AI’s mistakes to challenge assumptions in decision-making
Example: AI could be deployed in corporate strategy meetings or intelligence analysis to offer a radically different perspective on risk assessments—because it doesn’t fall into the same heuristic traps humans do.
Example: In medicine, AI diagnosis tools could highlight anomalies in patient data that doctors might otherwise overlook due to cognitive biases or fatigue.
5. Navigating the Future of Misinformation & Disinformation
Use Case: Detecting patterns in AI-generated misinformation
?? Example: If AI consistently generates false historical narratives, we could use this to audit and refine public knowledge databases. ?? Example: Social media companies could analyze AI-generated misinformation patterns to predict which narratives are most susceptible to manipulation.
Rather than reacting to misinformation, we could?use AI’s tendency to hallucinate (or error out) as a predictive tool?to identify?where public knowledge is most vulnerable.
So… Are AI Mistakes a Problem or an Opportunity?
Right now, AI errors feel like an inconvenience at best and a security risk at worst. But what if we designed AI mistakes to be useful?
Instead of making AI failures less weird, we should be asking:
We didn’t design AI to make mistakes this way, but now that it does, maybe the real innovation is learning to use those mistakes rather than fixing them.
?? What do you think? Should we try to eliminate AI’s errors or use them as a tool?
CHRISTINE HASKELL, PhD, is a collaborative advisor, educator, research editor, and author with 30 years in technology, driving data-driven innovation and teaching graduate courses in executive MBA programs at Washington State University’s Carson School of Business and is a visiting lecturer at the University of Washington’s iSchool. She lives in Seattle.
ALSO BY CHRISTINE
Driving Your Self-Discovery (2024), Driving Data Projects: A comprehensive guide (2024), and Driving Results Through Others (2021)
Founder & Director at Edumazing | Specialist Education & Wellbeing Coach/Consultant | Visionary Educator | Digital Innovator | Motivational Speaker | B Corp Advocate | Philanthropist
3 天前Insightful as always Christine. I'll repost with my thoughts as I have found this quite though-provoking including what this tells us about ourselves. Thank you for the inspiration.
Innovative Data, Analytics and AI Leader | Strategic Business Decision-Maker | Forensic & Financial Crime Expert | Harvard Business Review Advisory Council member
1 周The famous: "Its not a bug, its a feature" - the real life bugs in the machine (or humans) causing breakdowns in what would be perceived as the perfect outcome. We keep training to be the perfect 'robots' (and humans) that will magically give the best answers - which is often an impossible choice of judgement and experience. If AI is replicating humans imperfections and perspectives - is it any surprise we will see mistakes ? The real question is what do we want to do about it (Human or AI) to get a good enough solution to the problem ?
Provincial Digital Skills Coordinator & AI Enthusiast
1 周Insightful
Simplifying the Messy Middle of Data & Leadership | Advisor, Analyst & Speaker (ex-Microsoft, Starbucks, Amazon) | Author of ‘Driving Data’ Series | Transforming Organizations Through Data Culture & Governance
1 周David Pidsley Jonathan Hansing Dan Blake Dr. Dorothea Baur Brent Dykes Brett Salakas Georgina Pazzi Dan Goldin Dr. Irina Steenbeek Prof Maja Korica Genuinely Interested -
Owner/President at Kaybo Enterprises, Inc.
1 周Insightful. Thought-provoking.