Let's Forgo AI 'Hallucinations'
Charlie Greenberg
Strategic Product Marketing Leader with expertise in go-to-market strategies, content creation, and sales enablement for enterprise software solutions.
Maybe it’s just a product marketer’s concern, but how ‘bout we stop using the term ‘hallucination’ to describe unsatisfactory, or even totally unacceptable, Generative AI outputs?
Yes, when it comes to hallucinations, there have been some doozies (and thank you, Copilot).
But if we look at the true sources of inaccuracy, poor quality, and incomplete data in a given domain knowledge base, we’d best acknowledge that these down-to-earth issues are human-made, and best handled by a human in the loop.
The good news is that by injecting proprietary context into LLMs, using pre-tested, quality-controlled prompts, and fine-tuning models for reliable, accurate predictions, we surely can replace ‘hallucinations’ with only an occasional “OOPS!”
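For the more hands-on reader, here’s a minimal sketch of what ‘injecting proprietary context’ can look like in practice. The knowledge base, retrieval logic, and prompt template below are all hypothetical placeholders; a real system would use vector search and an actual model call.

```python
# A minimal sketch of context injection: ground the model's answer in
# proprietary domain snippets instead of letting it guess.
# All names and content here are illustrative placeholders.

KNOWLEDGE_BASE = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "support hours": "Support is available 9am-5pm ET, Monday-Friday.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval; real systems use embeddings/vector search."""
    return [
        text
        for topic, text in KNOWLEDGE_BASE.items()
        if any(word in query.lower() for word in topic.split())
    ]

def build_grounded_prompt(query: str) -> str:
    """Inject retrieved context and instruct the model not to guess."""
    context = "\n".join(retrieve(query)) or "NO CONTEXT FOUND"
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

print(build_grounded_prompt("What is your refund policy?"))
```

The point of the “answer only from the context” instruction is exactly the thesis above: when the model is constrained to human-curated content, a wrong answer traces back to the knowledge base, not to some mysterious machine ‘hallucination.’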
Plus, the overused term ‘hallucination’ is truly unhelpful and, technically, a non-descriptive metaphor: an LLM doesn’t perceive anything, falsely or otherwise, so it can’t hallucinate; it simply predicts plausible-sounding text.
Not to mention outdated.
Sorry, Timothy Leary. :-)