Lies, damned lies, and hallucinations
“ChatGPT can make mistakes. Check important info.” (AI-generated image by the author)

<TL;DR> This article explores the challenges of managing and mitigating hallucinations in large language models. Despite their advanced capabilities, these models sometimes produce hallucinations: incorrect or fabricated information. Can we simply define these hallucinations as errors? I aim to understand what hallucinations are and to present strategies for managing and hedging against them.

Obsessed with accuracy

For over fifty years, computers have consistently delivered accurate results, and their sophistication and precision have only improved. Yet in 2024, while you can trust a GPS to navigate you to a destination accurately, you can’t rely on the world’s most advanced AI to consistently provide factual information.

Ask any advanced AI about a simple historical fact, and you might find the answer is surprisingly close but still incorrect. Why is that?

Are hallucinations simply errors, or are they fundamentally different from what we traditionally consider errors? Let's explore this crucial and complex question in detail.

A vital aspect of the problem involves a significant shift in the definition of “AI” over the last 30 years. Historically, programming computers meant finding exact solutions to problems. For example, a GPS uses precise algorithms to calculate routes accurately. These exact methods, based on well-defined logical and mathematical principles, were traditionally seen as the pinnacle of artificial intelligence.

In the past, AI was synonymous with systems that followed clear, rule-based procedures to solve specific problems. These systems were designed to produce provably correct outputs, such as calculating a route from point A to point B or performing a specific task according to predefined rules.

However, today’s AI landscape is dominated by Machine Learning. Unlike traditional programming, which relies on deductive logic to produce correct outputs, machine learning involves creating models that generate predictions based on patterns in data. These predictions are inherently probabilistic and are expected to be occasionally wrong. Instead of following a fixed set of rules, machine learning algorithms learn from large datasets and make educated guesses, which can lead to errors.

In the first section of this article, I will explain the fundamental differences between machine learning and older types of AI. I will explore why machine learning systems, which are based on statistical inference and pattern recognition, are more prone to errors compared to traditional rule-based programs. This shift from deterministic to probabilistic models is a key reason modern AI systems can produce unexpected and sometimes incorrect results.

A Crash Course in Machine Learning

In the early days of AI, most systems were designed to solve very specific problems, such as predicting whether a user would click on a link, identifying objects in an image, or estimating a stock's future value. Each task was handled by a discrete computer program explicitly created for that purpose.

Building such programs initially involved reasoning from first principles. For example, predicting how long it would take an apple to hit the ground after falling involved understanding the principles of gravity, much like Newton did. While this method worked well for some problems, it was impractical for others, like identifying objects in images, where deriving solutions from first principles is extremely challenging.

This is where machine learning comes into play. Instead of trying to understand the underlying principles (e.g., Newton's law), machine learning algorithms look at vast amounts of data to find patterns and make predictions. For instance, by analyzing data from millions of apples falling from different heights, an algorithm can predict fall times without explicitly understanding gravity.
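
To make this concrete, here is a minimal sketch in Python. It fits a simple curve to a handful of invented (height, fall time) measurements; the data values and the choice of a polynomial fit are illustrative assumptions, not anything taken from a real system. The point is that the model predicts reasonably well without encoding any law of gravity.

```python
import numpy as np

# Hypothetical measurements: drop heights in metres and observed fall times in
# seconds (synthetic data standing in for "millions of falling apples").
heights = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
times = np.array([0.45, 0.64, 1.01, 1.43, 2.02, 3.19])

# Fit a simple curve to the data. The model never encodes Newton's law; it only
# reproduces the pattern present in the measurements.
coeffs = np.polyfit(heights, times, deg=2)

def predict_fall_time(height_m: float) -> float:
    """Predict fall time purely from the fitted pattern, not from physics."""
    return float(np.polyval(coeffs, height_m))

print(predict_fall_time(15.0))  # interpolates from the observed pattern
```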

The process of building such a system is called supervised learning. Here's how it works:

  1. Collect Data: Gather a large dataset relevant to the task.
  2. Label Data: Manually label each data point with the correct answer.
  3. Training: Show the labeled data to the computer, let it make guesses, and score them based on accuracy. Repeat this process many times to improve the guessing strategy.

The resulting system is called a model. When this model guesses from a set of possible labels, it's called a classifier, and its guesses are referred to as predictions.
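
As an illustration only (not the pipeline behind any particular product), here is a minimal supervised-learning sketch in Python using scikit-learn's bundled handwritten-digit images; the dataset and the choice of logistic regression are assumptions made for brevity.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Collect data: a small, labelled dataset of 8x8 handwritten digit images.
digits = load_digits()

# 2. Label data: load_digits() already ships with the correct label per image.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# 3. Training: the model repeatedly adjusts itself so that its guesses
#    (predictions) match the labels more often.
classifier = LogisticRegression(max_iter=5000)
classifier.fit(X_train, y_train)

# The trained model is a classifier; its outputs are predictions, and some of
# them will be wrong - e.g. an occasional 7 read as a 9.
print("accuracy:", classifier.score(X_test, y_test))
print("prediction for first test image:", classifier.predict(X_test[:1]))
```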

Machine learning differs significantly from traditional approaches like Newton’s. While Newton sought to understand the fundamental principles of physics, a machine-learning model focuses solely on reproducing patterns in the data accurately. This method might not explain the phenomena, but it effectively incorporates real-world complexities that purely theoretical approaches might miss.

In the past 15 years, it has become evident that supervised learning can tackle much more complex tasks than initially thought. With large datasets, models can handle various inputs and outputs. For example, image diffusion models like Stable Diffusion can generate entire images instead of just classifying them into a few categories.

However, building these complex models requires enormous amounts of data, often billions of labeled examples, which would be impractically time-consuming to label manually. This challenge led to the development of self-supervised learning, where models are trained on data that can be labeled automatically. For instance, by breaking sentences from the internet into parts and asking the model to predict the missing words, vast amounts of training data can be generated without any manual labeling.
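
Here is a hedged sketch of that idea: labels are manufactured from raw text by hiding one word per sentence, so no human annotation is required. The tiny corpus and the single-mask scheme are simplifications for illustration, not the exact procedure used to train any specific model.

```python
import random

# Raw, unlabelled text stands in for web-scale data; the sentences are invented
# for illustration.
corpus = [
    "the apple fell from the tree",
    "large language models predict the next word",
    "hallucinations are fabricated but fluent statements",
]

def make_training_pairs(sentences):
    """Turn raw sentences into (context, missing word) examples automatically.

    No human labelling is needed: the 'label' is simply the word that was
    removed, which is why this is called self-supervised learning.
    """
    pairs = []
    for sentence in sentences:
        words = sentence.split()
        masked_index = random.randrange(len(words))
        target = words[masked_index]
        context = words[:masked_index] + ["[MASK]"] + words[masked_index + 1:]
        pairs.append((" ".join(context), target))
    return pairs

for context, target in make_training_pairs(corpus):
    print(f"input: {context!r}  ->  label: {target!r}")
```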

Fundamentally, training modern generative AI systems like GPT follows the same principles as traditional classifiers: show the model incomplete examples, have it guess the completions, and score its guesses. The innovations lie in efficiently creating large training datasets and developing new types of models capable of handling complex tasks.

Even classical machine learning makes mistakes, such as a digit recognizer misidentifying a '7' as a '9': models that predict based on patterns rather than deterministic logic inherently produce errors. The error rate shrinks with more data and larger models, but it never reaches zero, and this highlights the core difference between classical rule-based AI and machine learning.

In summary, the evolution from rule-based systems to machine learning has transformed AI from solving narrowly defined problems with exact solutions to making probabilistic predictions based on data patterns, enabling it to tackle a broader range of complex tasks.


Are hallucinations just a nicer word for errors, or are they something different?

In AI, a model sometimes sees a picture of a 7 and mistakenly identifies it as a 9. This kind of error is straightforward and has always been part of AI. However, when a chatbot produces incorrect information, we often refer to it as a "hallucination." Why is this distinction made?

Both large language models (LLMs) and classical classifiers are similar in how they are built. Like a complex classifier, an LLM is trained to predict the next word in a sequence, just as a digit recognizer is trained to identify numbers. The primary difference lies in complexity and scale. But more importantly, how we use generative AI systems vastly differs from how we use traditional classifiers.

When we deploy a digit recognizer, it is used for the specific task it was trained on—recognizing digits. For instance, it might be used in a system that reads handwritten checks to identify the amount written.

Generative AI systems, on the other hand, are used differently. When an LLM is deployed as a chatbot, it shifts from predicting the next word in an existing sentence to generating the next word of a new, previously non-existent text string. This shift is significant because it means there is no correct answer against which to compare the output, unlike with traditional classifiers, where we can quickly evaluate accuracy.

For example, if you input the string "What is 2 + 2?" into a chatbot, the next word could be "4", "is", or "equals", among others. Each of these could lead to a different valid response. However, there are also incorrect possibilities like "5" or "2 + 2 = 5". The task of the LLM is fundamentally different because there’s no pre-existing correct answer to guide it.
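
A toy example makes this easier to see. The sketch below samples a continuation from an invented next-word distribution; the probabilities are made up for illustration and are not taken from any real model. Even a low-probability, wrong continuation will occasionally be generated.

```python
import random

# Hypothetical next-word probabilities an LLM might assign after the prompt
# "What is 2 + 2?". The numbers are invented for illustration only.
next_word_probs = {
    "4": 0.55,
    "equals": 0.20,
    "is": 0.15,
    "5": 0.07,      # a plausible-looking but wrong continuation
    "banana": 0.03,
}

def sample_next_word(probs):
    """Sample one continuation; even unlikely wrong answers can appear."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))
```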

An error in classical machine learning, like a digit recognizer calling a 7 a 9, is easy to identify because the correct answer is known. However, generative AI, like ChatGPT, generates text based on patterns and predictions without a definitive reference. For instance, if ChatGPT says that in 1969, the Beatles reunited for a secret concert in New York, it’s fabricating information that doesn’t exist.

The way ChatGPT produces such an answer involves making a series of word predictions that individually might seem reasonable but collectively create a false statement. These predictions can’t be easily classified as right or wrong on their own, which complicates the issue.

The distinction between an error and a hallucination becomes clear when considering the context. For traditional classifiers, errors occur when the output doesn’t match the known correct label. For generative AI, hallucinations occur when the output fabricates information that doesn’t correspond to reality. The nature of these systems means they can produce outputs that seem coherent but are factually incorrect.

Understanding this difference is crucial. While a digit recognizer’s error is straightforward, a chatbot’s hallucination involves more subtle and complex inaccuracies. This complexity arises because the text generated by a chatbot depends not only on patterns in the training data but also on its ability to create seemingly plausible but ultimately false narratives. Therefore, addressing hallucinations in generative AI requires a different approach than simply reducing errors in classical models.


Potential Ways to Handle Hallucinations in Generative AI

Handling hallucinations in generative AI systems like ChatGPT requires a multifaceted approach, given the complexity and nature of the problem. Here are several strategies that could be employed:

Enhanced Training Data: Increasing the quantity and quality of training data can help improve the model's accuracy. More diverse and comprehensive datasets can reduce the likelihood of the model generating fabricated information. Collect and curate large-scale, high-quality datasets from reliable sources. Ensure the training data covers a wide range of topics accurately.

Contextual Awareness and Fact-Checking: Integrate systems that cross-reference generated content with reliable databases and factual information. Develop modules that fact-check statements against trusted databases in real time. For instance, linking the AI to databases like Wikipedia or specialized knowledge bases can help verify facts before they are included in the generated output.
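
A purely illustrative sketch of such a check follows; the lookup table, the claim format, and the exact-match rule are simplifying assumptions, since a production system would retrieve from a real knowledge base and match claims far more robustly.

```python
# Minimal sketch of post-generation fact-checking against a small, trusted
# knowledge base. All facts below are hard-coded for illustration.
TRUSTED_FACTS = {
    "the beatles reunited in 1969 for a secret concert in new york": False,
    "the beatles released abbey road in 1969": True,
}

def check_claim(claim: str) -> str:
    """Return a verdict for a generated claim, or flag it as unverifiable."""
    key = claim.strip().lower()
    if key not in TRUSTED_FACTS:
        return "unverified - needs human review or wider retrieval"
    return "supported" if TRUSTED_FACTS[key] else "contradicted - likely hallucination"

generated = "The Beatles reunited in 1969 for a secret concert in New York"
print(check_claim(generated))
```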

Post-Processing and Validation: Implement post-processing steps to review and validate the generated text before presenting it to the user. Use additional algorithms or human oversight to scrutinize the output. This can involve semantic analysis to ensure coherence and factual accuracy.

Reinforcement Learning with Human Feedback (RLHF): Incorporate human feedback into the training process to fine-tune the model's responses. Continuously collect feedback from users and experts on the accuracy of the AI's outputs. Use this feedback to retrain the model, focusing on reducing the occurrence of hallucinations.

Prompt Engineering and Constraints: Design prompts that minimize the risk of generating hallucinations by guiding the model towards more accurate responses. Create templates and constraints within prompts that limit the model's output to verifiable information. For example, asking the AI to cite sources or provide evidence for its statements can reduce fabrications.
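
For example, a constrained prompt template might look like the following sketch; the wording of the template is an assumption and would need tuning against the actual model in use.

```python
# A hedged sketch of a prompt template that nudges a model toward verifiable
# answers. Substitute whatever LLM client you actually use to send the prompt.
PROMPT_TEMPLATE = (
    "Answer the question below using only information you can attribute to a "
    "source. For every factual claim, cite the source in brackets. If you are "
    "not confident a claim is correct, say 'I am not sure' instead of guessing.\n\n"
    "Question: {question}"
)

def build_prompt(question: str) -> str:
    """Fill the constrained template with the user's question."""
    return PROMPT_TEMPLATE.format(question=question)

print(build_prompt("Did the Beatles perform together in 1969?"))
```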

Hybrid Models: Combine generative AI with rule-based systems to leverage the strengths of both approaches. Use rule-based systems to handle factual queries while employing generative models for more creative tasks. This ensures accuracy where needed and creativity where appropriate.
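
A toy sketch of such routing is shown below: queries that look like simple arithmetic are answered exactly by classical code, and everything else falls through to a generative model (represented here by a placeholder function, since no specific LLM client is assumed).

```python
import re

def deterministic_answer(query: str):
    """Handle simple arithmetic exactly, the way classical software would."""
    match = re.fullmatch(r"\s*what is (\d+)\s*([+\-*])\s*(\d+)\s*\??\s*", query.lower())
    if not match:
        return None
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def generative_answer(query: str) -> str:
    """Stand-in for a call to an LLM; replace with your actual client."""
    return f"[generated response to: {query}]"

def answer(query: str) -> str:
    exact = deterministic_answer(query)
    return str(exact) if exact is not None else generative_answer(query)

print(answer("What is 2 + 2?"))                   # exact, rule-based path: 4
print(answer("Write a short poem about apples"))  # generative path
```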

User Education and Transparency: Educate users about the limitations of generative AI and promote transparency regarding its capabilities and shortcomings. Provide disclaimers about the potential for hallucinations and encourage users to verify critical information. Transparency about the AI's training data and methods can also build trust and understanding.

Regular Audits and Updates: Conduct regular audits of the AI's performance and update the model to address any identified issues. Set up a periodic review process where the AI's outputs are systematically evaluated for accuracy. Use the findings to improve the model continuously.

By combining these strategies, it is possible to mitigate the problem of hallucinations in generative AI systems. Each approach addresses different aspects of the issue, from improving the underlying data and algorithms to enhancing the interaction between humans and AI.

Philosophical and Practical Implications

The debate over whether all generative AI outputs are hallucinations has significant implications. If we accept that these models generate outputs by reconstructing imagined data, we must rethink how we evaluate and trust AI systems. Some AI researchers view hallucinations as a defining feature of generative AI, likening them to "dream machines" that creatively synthesize information.

However, this perspective also highlights the challenges in reducing hallucinations. Traditional methods like increasing data or model size might not suffice. Instead, innovative approaches may be necessary, such as new training methods or hybrid systems.

In conclusion, addressing hallucinations in generative AI requires a multifaceted approach, combining technical improvements with user education and transparency. While hallucinations present a significant challenge, understanding their nature and exploring diverse strategies to mitigate them can lead to more reliable and valuable AI systems.

Michel Stevens

Customer Experience Director | Speaker

4 months ago

A parrot doesn't know what it's saying. A parrot doesn't know what it means. A parrot sees or hears a pattern that it recognizes and "speaks". Replace the word "parrot" with "AI" in the lines I just wrote and they will still make sense. Replace the word "AI" with "parrot" whenever you read something about genAI and everything will make sense.

Christopher Brooks

Global Customer Experience Management Consultant

4 months ago

Thank you for sharing your wisdom and your consideration about managing hallucinations. I found this a fascinating read and one of the best articles I've read on AI.
