How to Outsmart GenAI Hallucination


GenAI tools tend to respond with inaccurate, contradictory, or irrelevant information. Understand why this happens and how you can outwit genAI.

What is genAI hallucination?

GenAI hallucination is when a generative AI tool produces answers that contain false information, made-up data and events, contradictory statements, or irrelevant information. When it hallucinates, a genAI tool presents its answer as authoritative fact, driven by its tendency to please the user.

A sample conversation with genAI about a fictitious character. Know that you can outsmart genAI hallucinations!


Don't be fooled by lengthy answers!

Why does genAI hallucination happen in the first place?

The very nature of genAI largely explains the tools' tendency to hallucinate. Here are some specific factors behind genAI hallucination:

  • Outdated, low-quality, inaccurate, or biased training data. GenAI tools are trained to generate human-like responses after being fed massive amounts of data from the internet. This means their answers are only as good as that training data, and old or deliberately erroneous data can shape a genAI output.

Popular genAI tools like Google Bard are transparent with the disclaimer that the tool will make mistakes.


Know your genAI tool's capabilities and limitations.


  • Guardrails. The implementation of guardrails in genAI tools affects hallucination. Guardrails teach a genAI tool how to answer when no data is available, or restrict it to drawing on reliable sources only. Some genAI tools have firm guardrails, while others do not.
  • Absence of judgment and a sense of reality. GenAI is trained to string words together and generate new content based on what users ask. However, genAI tools have no judgment, nor do they apply logic to the output they churn out. Users can depend on genAI for grammar and for expanding ideas, but it cannot be relied on to adhere to logic, facts, appropriateness, or reality.

An example of genAI hallucination: a genAI response can be grammatically correct but not factual.


Watch out for incoherent or irrelevant genAI output.


  • Unclear prompts. Because genAI has no sense of reality, genAI tools may not understand slang or idioms in users' queries (also called "prompts"). The tools may answer these queries literally and end up hallucinating. Likewise, because genAI cannot be depended on for logic, contradictory or unclear statements in users' prompts may lead genAI tools to answer with incoherent or make-believe output.

How can one work around genAI hallucination?

  • Use specific prompts. The quality of a genAI tool's output depends heavily on how specific the user's prompt is. Open-ended questions can invite hallucination, while additional context in prompts helps limit genAI responses to accurate information (see the sketch below).

An example of a specific prompt is instructing genAI to remove fictional mentions.
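
To make the contrast concrete, here is a minimal Python sketch comparing an open-ended prompt with a specific, context-grounded one. The ask_genai helper is hypothetical; it stands in for whichever genAI tool or API you actually use.

    # Hypothetical helper standing in for your genAI tool's API.
    def ask_genai(prompt: str) -> str:
        ...  # send the prompt to the tool and return its reply

    # Open-ended prompt: invites the tool to fill gaps with invented detail.
    vague_prompt = "Tell me about the story 123."

    # Specific prompt: supplies context and constrains the answer,
    # leaving far less room for hallucination.
    specific_prompt = (
        "Using only the passage below, list the characters that appear in "
        "the story 123. Remove any fictional mentions not found in the "
        "passage. If the passage does not say, answer 'not stated'.\n\n"
        "PASSAGE: <paste the source text here>"
    )

The exact wording matters less than the pattern: name the source material, state the task narrowly, and tell the tool what to do when the information is missing.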


Some examples of specific prompts are instructions such as the following (a combined sketch appears after the examples):

  • role playing - “You are a writer for a tech website. Write an article about X, Y, and Z.”

Tell the genAI tool to take the role of a credible source.


  • yes/no questions - “Did the name ABC appear in the story 123, yes or no?”

Asking "yes or no?" may prevent genAI hallucination.
Use clear and specific prompts that include the question "yes or no?".


  • explicitly telling the genAI tool to only tell the truth - “Did the name ABC appear in the story 123? If you don’t know, say ‘I don’t know’.”

Explicitly get the genAI tool to admit if it does not have the answer.
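
The three patterns above can be combined in a single chat-style prompt. The message structure below mirrors common chat-based genAI APIs, but the field names are an assumption, not any specific vendor's schema; adapt them to the tool you use.

    # Sketch only: the role/content message shape is an assumption modeled
    # on common chat-style genAI APIs.
    messages = [
        # Role playing: anchor the tool to a credible persona.
        {"role": "system",
         "content": "You are a careful fact-checker for a tech website."},
        # Yes/no question plus an explicit escape hatch for the unknown.
        {"role": "user",
         "content": ("Did the name ABC appear in the story 123, yes or no? "
                     "If you don't know, say 'I don't know'.")},
    ]

Constraining both the persona and the allowed answers ("yes", "no", or "I don't know") leaves the tool little room to improvise.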


  • Verify genAI output. With or without specific prompts, it is still wise to evaluate genAI output against credible sources, given that genAI cannot be relied on for logic, facts, appropriateness, or reality (a simple checklist sketch follows this list).
  • Combine the human touch and genAI technology. Use genAI as a starting point: keywords, ideas, summaries, and outlines can come from genAI tools. The human touch is still the best source for insight, critical thinking, and creativity.
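
As a closing illustration, here is a minimal sketch of that human-in-the-loop verification step. The claims and trusted facts are placeholders; in practice, the trusted set would come from credible sources you consult yourself.

    # Placeholder data: swap in claims extracted from the genAI draft and
    # facts confirmed against credible sources.
    trusted_facts = {"claim A", "claim B"}
    draft_claims = ["claim A", "claim C"]

    # Anything not confirmed by a credible source goes back to a human.
    for claim in draft_claims:
        status = "verified" if claim in trusted_facts else "NEEDS HUMAN REVIEW"
        print(f"{claim}: {status}")

GenAI drafts the claims; a person, armed with credible sources, decides which ones survive.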

#genAI #generativeAI #prompts



