Echo... echo... echo...
Jenny Griffiths MBE
Why Hallucinations Happen in Generative AI
Generative AI can be incredibly powerful, but it’s not perfect—sometimes, it produces something unexpected or incorrect, known as a "hallucination." Hallucinations in AI occur when the model generates information that doesn’t accurately reflect the data it was trained on, resulting in factually wrong or nonsensical content. So, why does this happen, especially given the intricate pattern-recognition abilities of neural networks?
It all comes down to how the AI is trained and how it predicts the next word, phrase, or element in a sequence.
Pattern Recognition Gone Astray
Neural networks, as we discussed earlier, are trained to recognise patterns in vast amounts of data. However, they don’t have true "understanding" in the way humans do. Instead, they work probabilistically. For example, when you ask a generative AI to write a paragraph, it doesn’t "know" the facts about a topic like a human might. Instead, it predicts what’s most likely to come next based on the patterns it's seen in its training data.
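To make that concrete, here's a deliberately tiny sketch in Python. It is not how a real large language model works under the hood (those are neural networks with billions of parameters), just a toy that counts which word follows which in a scrap of invented text and then predicts the most common continuation. The core move is the same: predict what usually comes next.

```python
from collections import Counter, defaultdict

# Toy "training data": the only text this toy model has ever seen (invented here).
training_text = (
    "the eiffel tower is in paris "
    "the eiffel tower is made of iron "
    "the colosseum is in rome"
).split()

# "Training": count which word tends to follow each word.
next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(training_text, training_text[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    """Return the statistically most likely next word and its probability."""
    counts = next_word_counts[word]
    if not counts:
        return None, 0.0
    total = sum(counts.values())
    best_word, best_count = counts.most_common(1)[0]
    return best_word, best_count / total

print(predict_next("tower"))  # ('is', 1.0): a learned pattern, not "knowledge"
print(predict_next("is"))     # ('in', 0.666...): just frequency, nothing more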
Here’s where hallucinations happen. If the AI encounters a situation where it doesn't have enough reliable data or context to base its prediction on, it can still confidently generate an answer—it just might be wrong. The AI fills in the gaps, using its learned patterns to complete the request, even if the output doesn’t align with reality. The result can be a completely fabricated but seemingly plausible piece of information.
Overconfidence in Predictions
AI models are designed to generate coherent responses based on what they "believe" is the most probable continuation of a sequence. This mechanism works well in structured tasks but can falter when the AI is asked to generate specific or factual information. For instance, if the AI is tasked with generating an answer about an obscure historical event, and it lacks sufficient data on that topic, it might still confidently produce an answer—because that's how it’s programmed. The AI doesn’t know when it’s wrong, so it doesn’t hesitate.
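One way to picture this: the final step of generation turns the model's raw scores into probabilities that always add up to 1, and then an answer gets picked. The sketch below (with made-up candidate answers and scores) shows how that step has no built-in way to say "I don't know".

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that always add up to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three candidate answers to an obscure question
# the model has barely any data about: nearly flat, because there is no
# strong signal either way. (Candidates and scores are invented.)
candidates = ["1687", "1742", "1803"]
scores = [0.21, 0.18, 0.15]

probs = softmax(scores)
print({c: round(p, 2) for c, p in zip(candidates, probs)})
# {'1687': 0.34, '1742': 0.33, '1803': 0.32}

best_answer = max(zip(candidates, probs), key=lambda pair: pair[1])[0]
print("Model answers:", best_answer)
# It still answers "1687": the mechanism has no option to say "I don't know".
```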
Generative AI is famously not fantastic at arithmetic, for reasons we'll continue to explore below. Equally, humans are not fantastic at spotting complex mathematical errors. This creates a perfect storm for mathematical hallucinations.
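The practical upside is that arithmetic is also the easiest kind of hallucination to catch, because the real calculation is one line of code (or one calculator) away. The prompt and the "model answer" below are invented purely for illustration.

```python
# Hypothetical scenario: a chatbot is asked "What is 4,387 * 9,216?" and
# confidently replies with a number that *looks* right but isn't.
model_answer = 40_433_592   # invented hallucinated answer

# One line of ordinary code does the arithmetic for real:
actual = 4_387 * 9_216
print(actual)                  # 40430592
print(model_answer == actual)  # False: easy to catch, once you actually check
```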
Limitations in Training Data
Another key factor behind hallucinations is the limitations of the AI’s training data. Even though the model is trained on vast datasets, those datasets can’t cover every possible scenario or contain perfect information. If the training data contains biases, gaps, or inaccuracies, the AI may reflect those issues in its outputs. This is especially evident when the AI generates something on a topic it hasn’t "seen" enough examples of, leading to guesswork.
For example, if you ask an AI to write a story set in a highly specific historical era with few examples in its dataset, it may make up facts or blend details from unrelated sources because it’s trying to produce an answer without fully "understanding" the context.
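Continuing the toy sketch from earlier, here is roughly what that gap-filling looks like: when the "model" below has never seen data about a topic, it falls back to the most familiar pattern it does know and still produces a confident-looking answer. Again, the training sentences are invented and this is a toy, not a real model.

```python
from collections import Counter

# Toy training set (invented): plenty about one era, nothing about another.
training_sentences = [
    "victorian london had gas lamps",
    "victorian london was foggy",
    "victorian factories used steam engines",
]

all_word_counts = Counter(" ".join(training_sentences).split())

def complete(prompt_word):
    """Predict the next word; if the topic is missing from the data,
    fall back to the most familiar pattern instead of admitting ignorance."""
    seen_after = []
    for sentence in training_sentences:
        words = sentence.split()
        if prompt_word in words and words.index(prompt_word) + 1 < len(words):
            seen_after.append(words[words.index(prompt_word) + 1])
    if seen_after:
        return Counter(seen_after).most_common(1)[0][0]
    # The gap: no data on this topic, so the "model" guesses from
    # unrelated patterns rather than saying "I don't know".
    return all_word_counts.most_common(1)[0][0]

print(complete("victorian"))  # 'london': well covered by the data
print(complete("byzantine"))  # 'victorian': a confident guess from the wrong era
```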
The Challenge of Ambiguity
Humans excel at understanding nuance, ambiguity, and context in ways that AI struggles to replicate. We can distinguish between fact and fiction, or recognise when we don’t have enough information to answer confidently. AI, on the other hand, follows its algorithms to predict what’s most likely, even in ambiguous or ill-defined situations. When faced with ambiguous input, the AI may generate something that fits the pattern, but isn’t grounded in reality—leading to hallucinations.
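Here is a tiny illustration of that commitment to "most likely", with made-up numbers: even when two readings of an ambiguous prompt are almost equally probable, the model simply picks one and carries on, and the near 50/50 uncertainty never reaches you.

```python
# For an ambiguous request like "Tell me about Mercury", suppose the learned
# patterns point in two directions at once. (Numbers invented for illustration.)
interpretations = {
    "the planet Mercury": 0.51,
    "the element mercury": 0.49,
}

# Generation simply commits to the single most likely reading, so the
# near 50/50 uncertainty is never surfaced to the reader.
chosen = max(interpretations, key=interpretations.get)
print(chosen)  # 'the planet Mercury'
```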
Preventing Hallucinations: A Work in Progress
Developers are actively working on minimising hallucinations in AI systems. This includes refining training processes, integrating fact-checking mechanisms, and improving how AI models deal with uncertainty. However, the underlying structure of neural networks, which depends on pattern recognition and probabilistic predictions, means that hallucinations are a known challenge for now.
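As a cartoon of one such mitigation, the sketch below only passes a claim through if it can be found in a trusted reference text, and flags it otherwise. Real grounding and retrieval-augmented systems are far more sophisticated; the reference sentence and the simple substring check here are purely illustrative.

```python
# Cartoon of one mitigation idea: only show a claim as-is if it can be found
# in a trusted reference text; otherwise flag it for a human to check.
trusted_reference = (
    "The Eiffel Tower was completed in 1889 and stands in Paris, France."
)

def flag_if_unsupported(claim: str) -> str:
    supported = claim.lower() in trusted_reference.lower()
    return claim if supported else f"[UNVERIFIED] {claim}"

print(flag_if_unsupported("completed in 1889"))  # passes the check
print(flag_if_unsupported("completed in 1901"))  # [UNVERIFIED] completed in 1901
```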
Just remember the power of the human in the loop: essentially, sense-checking answers. You wouldn't believe everything you read on the Internet, so bear in mind that anything that comes back from a GenAI solution could be wonky. Like any content you read, make sure you question it, fact-check it, and apply some common sense.
--
Hi, I’m Jenny. I’m VP of AI Innovation at Oracle’s AI Office. Before my current role, I ran an AI company for over a decade, so I’ve witnessed first-hand the impact AI can have on companies, both large and small. My team and I are on a mission to make everyone an AI hero, which is why you can follow me for free here on LinkedIn to become an expert. Please subscribe and share to help spread the knowledge. I'm creating this newsletter with the help of my robot buddies.
Reader comment (4 months ago): So generative AI's ability to create is limited to its existing data set and the patterns that exist within it?