Let's Forgo AI 'Hallucinations'
Original AI generation via Canva prompt


Maybe it’s just a product marketer’s concern, but how ‘bout we stop using the term ‘hallucination’ to describe unsatisfactory, or even totally unacceptable, Generative AI outputs?

Yes, when it comes to hallucinations, there have been some doozies (and thank you, Copilot):

  • A Fortune 500 technology company's chatbot incorrectly claimed that the James Webb Space Telescope captured the first images of a planet outside our solar system.
  • An even bigger Fortune 500 technology company's chatbot, 'Sydney', professed love for its users and claimed to have spied on the company's own employees.
  • A mammoth social media company (with its own AI division) pulled its public LLM demo in 2022 after it served users inaccurate, and sometimes prejudiced, information.

But if we look at the true sources of these failures (inaccurate, poor-quality, or incomplete data in a given domain knowledge base), we'd best acknowledge that these down-to-earth issues are human in origin, and best handled with a human in the loop.

The good news is that by injecting proprietary context into LLMs, using pre-tested, quality-controlled prompts, and fine-tuning for reliability and accuracy, we can surely replace ‘hallucinations’ with only an occasional “OOPS!”
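To make that concrete, here's a minimal sketch of what "injecting proprietary context" plus a pre-tested prompt can look like in practice. Everything in it is illustrative: the toy corpus, the keyword-overlap retriever, and the call_model stand-in (swap in whichever LLM client you actually use) are assumptions for the sake of the example, not any particular vendor's API.

    # Illustrative sketch: ground the model in retrieved proprietary context
    # and constrain it with a pre-tested prompt template. CORPUS, retrieve(),
    # and call_model() are hypothetical stand-ins, not a specific product's API.

    CORPUS = [
        "Product X ships with a 2-year limited warranty.",
        "Product X supports USB-C charging at up to 65 W.",
        "Returns are accepted within 30 days of purchase.",
    ]

    PROMPT_TEMPLATE = (
        "Answer ONLY from the context below. If the answer is not in the "
        "context, reply exactly: I don't know.\n\n"
        "Context:\n{context}\n\n"
        "Question: {question}\nAnswer:"
    )

    def retrieve(question: str, top_k: int = 2) -> list[str]:
        """Toy retriever: rank corpus passages by word overlap with the question."""
        q_words = set(question.lower().split())
        ranked = sorted(CORPUS, key=lambda p: -len(q_words & set(p.lower().split())))
        return ranked[:top_k]

    def call_model(prompt: str) -> str:
        """Hypothetical stand-in for your LLM client; in practice, use the
        lowest temperature your provider allows so outputs stay testable."""
        raise NotImplementedError("plug in your model call here")

    def grounded_answer(question: str) -> str:
        context = "\n---\n".join(retrieve(question))
        return call_model(PROMPT_TEMPLATE.format(context=context, question=question))

The point isn't the plumbing; it's that a constrained prompt over a curated, human-maintained knowledge base turns an open-ended generator into something you can test before it ships, so when it fails, it fails as a checkable "I don't know" rather than a confident invention.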

Plus, the overused term ‘hallucination’ is truly unhelpful and, technically, a non-descriptive metaphor: a model doesn't perceive anything; it simply predicts plausible-sounding text, accurate or not.

Not to mention outdated.

Sorry, Timothy Leary. :-)

