AI Hallucinations
Generative artificial intelligence (AI) has become widely popular, but its adoption by businesses comes with a degree of ethical risk. Organizations must prioritize the responsible use of generative AI by ensuring it is accurate, safe, honest, empowering, and sustainable.
What Are AI Hallucinations?
An AI hallucination occurs when an artificial intelligence model produces inaccurate or misleading output but presents it as if it were factual.
Examples of AI hallucinations
AI hallucinations can take many different forms. Some common examples include:
- Factual errors stated with confidence, such as incorrect dates, statistics, or historical details.
- Fabricated sources, such as citations, quotes, or URLs that do not exist.
- Invented details that go beyond anything supported by the prompt or the model's training data.
To many users, generative AI’s hallucinations are an irritating bug that they assume the tech companies will fix one day, just like email spam. To what extent companies can do so is currently a subject of active research and fierce contention. Some researchers argue that hallucinations are inherent to the technology itself. Generative AI models are probabilistic machines trained to give the most statistically likely response. It is hard to code in human traits such as common sense, context, nuance or reasoning.
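To see why, consider how generation works under the hood. The sketch below is a deliberately simplified illustration, with made-up words and probabilities: the model chooses its next word by sampling from a probability distribution, not by consulting a source of truth, so a plausible-sounding wrong answer can always come out.

```python
import random

# Toy illustration: a language model picks the next word by sampling from a
# probability distribution, not by checking facts. The words and probabilities
# here are invented for demonstration purposes only.
next_word_probs = {
    "Paris": 0.72,     # the statistically likely continuation
    "Lyon": 0.15,
    "Berlin": 0.08,    # plausible-sounding but wrong continuations
    "Atlantis": 0.05,  # can still be sampled, producing a "hallucination"
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Sample one word in proportion to its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The capital of France is"
print(prompt, sample_next_word(next_word_probs))
```

Most of the time the sampler returns the likely answer, but nothing in the mechanism prevents it from returning the unlikely one, which is why hallucinations are so hard to eliminate outright.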
Dangers of AI Hallucinations
The risk posed by a hallucination depends on the stakes. For example, the risk of harm when a generative AI chatbot gives incorrect steps for cooking a recipe is much lower than when it gives a field service worker instructions for repairing a piece of heavy machinery. If generative AI is not designed and deployed with clear ethical guidelines, its errors can cause real harm.
Generative AI systems can produce inaccurate and biased content for several reasons:
- Training data may be incomplete, outdated, or biased, and the model reproduces those flaws.
- Models optimize for plausible-sounding text, not verified truth, so fluent errors slip through.
- Outputs are often not grounded in authoritative sources the model can check against.
- Vague or ambiguous prompts leave room for the model to fill gaps with invented detail.
The Bottom Line
Generative AI tools aren’t always capable of understanding emotional or business context, or knowing when they’re wrong or damaging.
Humans need to be involved to review outputs for accuracy, suss out bias, and ensure models are operating as intended. More broadly, generative AI should be seen as a way to augment human capabilities and empower communities, not replace or displace them.
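One way to operationalize that human review is a simple approval gate, where generated drafts are queued for a person instead of being published directly. The sketch below is a minimal illustration of the pattern; the class and function names are hypothetical, not a real API.

```python
# Hypothetical human-in-the-loop gate: AI drafts are held for review instead
# of being published automatically. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False

review_queue: list[Draft] = []

def submit_ai_draft(text: str) -> Draft:
    """Queue a generated draft for human review rather than auto-publishing."""
    draft = Draft(text)
    review_queue.append(draft)
    return draft

def human_review(draft: Draft, is_accurate: bool, is_unbiased: bool) -> None:
    """Only a human reviewer can mark a draft as safe to publish."""
    draft.approved = is_accurate and is_unbiased

def publish(draft: Draft) -> str:
    """Refuse to publish anything that has not passed human review."""
    if not draft.approved:
        raise ValueError("Draft has not passed human review.")
    return draft.text
```

The design choice matters more than the code: the publish step fails closed, so the default outcome of skipping review is that nothing ships, rather than that unvetted output reaches users.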