It’s a Confabulation, not a Hallucination.

As generative AI continues to evolve, our understanding of how these systems operate, and the language we use to describe them, must evolve as well. One term that has gained traction is "AI hallucination," referring to instances when an AI generates incorrect or fabricated information. However, a more precise term would be "AI confabulation."

What Are AI Hallucinations?

In AI, the term "hallucination" describes scenarios where a model produces responses that are factually incorrect or entirely made up. This metaphor, borrowed from human cognition, implies a kind of random and irrational behavior, akin to what a person might experience under the influence of drugs or in a psychotic episode. While this term helps communicate that the AI has produced something wrong, it doesn't fully capture the nature of the error. Hallucinations, by definition, are perceptual experiences without a basis in reality: random, illogical, and disconnected from context.

Why "Confabulation" is a Better Term

The term "confabulation" originates from psychology and refers to the process by which someone with memory loss invents plausible, coherent, but false memories to fill in gaps. If you have an aging parent, you may have experienced this. The parent may begin to tell a story, then forget how it ends. They may be embarrassed because their memory failed them, so they just make up a logical ending. Sometimes they may even tell someone else's story and claim it as their own.

Confabulations are logical and make sense within the context of a person's experiences and reasoning, even if they are factually incorrect. This is much closer to what generative AI does when it provides inaccurate information.

Generative AI systems, like OpenAI's GPT models, work by predicting the most likely next word or phrase based on patterns learned from massive datasets. When they "hallucinate," they aren't generating random noise. Instead, they're constructing plausible-sounding but incorrect information, driven by the same mechanisms that generate correct answers. Just like a human confabulating a memory, the AI is trying to "fill in the blanks" using the available data but sometimes ends up creating something that isn't true.
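
To make that mechanism concrete, here is a minimal, purely illustrative sketch of next-token sampling in Python. The prompt, vocabulary, and probabilities are invented for this example and do not come from any real model, but the point carries over: the same sampling step that produces the correct completion also produces the fluent, plausible, wrong one.

```python
import random

# Toy "language model": a single context mapped to a probability
# distribution over possible next words. The numbers are invented for
# illustration; a real LLM learns such statistics from massive datasets.
NEXT_WORD_PROBS = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # factually correct
        "Sydney": 0.40,     # fluent and plausible, but wrong: a confabulation
        "Melbourne": 0.05,
    },
}

def sample_next_word(context: str) -> str:
    """Sample the next word from the model's learned distribution."""
    dist = NEXT_WORD_PROBS[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The capital of Australia is"
    for _ in range(5):
        print(prompt, sample_next_word(prompt))
```

Nothing in the sampling step distinguishes the true continuation from the false one; both are drawn from the same learned distribution, which is exactly why the errors read as confident and coherent rather than as noise.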

Shifting Our Mindset

The shift from "hallucination" to "confabulation" may seem like a small semantic change, but it has profound implications. Using the term "confabulation" emphasizes that AI errors are often not random or disconnected from logic. They follow a predictable path based on the model's understanding of language, structure, and probability, even if that path leads to incorrect conclusions.

Hallucinations are easily identifiable as false. Confabulations sound true, but are not.

This distinction helps users—especially those in business or risk-sensitive fields—better understand the risks associated with AI output. If we think of AI as "confabulating" rather than "hallucinating," we acknowledge that it can produce well-formed, logical responses that are nonetheless wrong. This insight can encourage more careful human oversight, especially in applications where the cost of errors is high.

Why This Matters

Generative AI is increasingly being deployed in critical applications—from healthcare to legal services to cybersecurity. In these areas, the risks associated with AI-generated errors can be significant. By framing these errors as "confabulations," we can better prepare users to understand the nature of the mistakes and take appropriate action, such as cross-referencing AI-generated outputs or implementing checks to catch inaccuracies.
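
As a hedged sketch of what such a check might look like in practice, the Python below asks the model the same question several times and only trusts an answer it gives consistently. The ask_model function is a hypothetical placeholder for whatever client you actually use; it is not a real API.

```python
from collections import Counter

def ask_model(question: str) -> str:
    """Hypothetical placeholder: call your LLM of choice and return a short answer."""
    raise NotImplementedError("wire this up to your model client")

def self_consistency_check(question: str, runs: int = 5, threshold: float = 0.8):
    """Ask the same question several times and return (answer, trusted).

    `trusted` is True only when the most common answer appears in at least
    `threshold` of the runs; disagreement is a cheap signal that the model
    may be confabulating.
    """
    answers = [ask_model(question).strip().lower() for _ in range(runs)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, (count / runs) >= threshold
```

Consistency is not the same as truth: a model can confabulate the same wrong answer every time, so in high-stakes settings even an agreeing answer still warrants verification against an authoritative source.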

Additionally, this shift in terminology can help AI researchers and developers better focus on improving the systems. Understanding that AI errors are logical in structure suggests that solutions can be found in refining models to recognize and correct these confabulations, rather than trying to eliminate random noise.

AI doesn't hallucinate—it confabulates.

It's time to move beyond the term "hallucination" and adopt "confabulation" when discussing AI-generated inaccuracies. This change would provide a clearer, more accurate understanding of the nature of these errors and better reflect how generative AI works. As AI becomes more integrated into business processes, security operations, and daily life, having the right language to describe its behavior is essential for managing risk and maximizing its value.

In short, AI doesn't hallucinate—it confabulates. And understanding that difference is key to unlocking its full potential while keeping its limitations in check.

This shift in perspective will not only improve our approach to AI development but also help align expectations for users and stakeholders, paving the way for safer and more effective AI systems across industries.

About Tim Howard

Tim Howard is the founder of five technology firms, including Fortify Experts, which helps companies create higher-performing teams through:

  • People (Executive Search and vCISO/Advisory consulting),
  • Process (NIST-based security assessments and Leadership Coaching),
  • Technology (Simplifying Security Solutions).

How I can help you:

  1. Join over 30,000 people getting free security leadership improvement advice: follow me on LinkedIn. www.dhirubhai.net/in/timhoward
  2. If you want to hire a great security leader, download our free ebook on How to Hire a Great CISO.
  3. If you want to quickly assess your cybersecurity maturity level or need a strategic improvement roadmap, contact me.
  4. If you are looking to simplify cybersecurity, check out Fortified Desk: the zero-client, secure, instantly deployable BYOD workspace.
  5. Come be a part of the discussion in our Monthly CISO Forums.


Steve Zalewski

CISO | Advisor | Investor | Speaker

1 mo

I struggle with the underlying concern that we are making this a more complex problem than it really deserves of our time and attention. My simple analogy: assume we ask a student to write a paper, for example on the impacts of gravity on the earth. The problem is that we have not defined "student". It could be a 4th grader, a high schooler, or a PhD candidate. So assumptions are being made on the definition, and then when we look at the output of the prompt from the LLM model that has been "trained", we have what I call the challenge of "unintended consequences". Our assumption was never validated, so the outcome is called erroneous and we argue over the answer rather than looking at the root cause of misunderstanding.

Matthew Rosenquist

CISO at Mercury Risk - Formerly Intel Corp, Cybersecurity Strategist, Board Advisor, Keynote Speaker, 190k followers

1 mo

It may be more accurate, but it is a distinction without a practical difference. At the end of the day, the AI system is confidently providing false information in an articulate way, to the detriment of the recipient.

Not a fan of the term hallucination, especially since my undergraduate degree was in psychology, but it's become a common way of describing the issue and I would avoid the confusion.

Joan Sanchez

Full AI Automation

1 mo

Interesting

Eugene Neelou

UAE | Pioneer in AI Security & Safety | Looking to join or consult AI / Security startups with cyber and Product-Led Growth strategies | ex-Founder, CTO, CISO, Product Manager, Industry Expert

1 mo

LLM hallucinations might be a debatable term, but it's the one most widely adopted and understood by both academia and industry, as well as by the general public. Talking to AI stakeholders, I'd prefer to focus on finding solutions rather than fighting about the right terminology.
