Generative AI: It’s All A Hallucination!
Bill Franks
Internationally recognized chief analytics officer who is a thought leader, speaker, consultant, and author focused on analytics, data science, and AI
No business executive has been able to avoid the excitement, concern, and hype surrounding the generative AI tools that have taken the world by storm over the past few months. Whether it's ChatGPT (for text), DALL-E 2 (for images), OpenAI Codex (for code), or one of the myriad other examples, there is no end to the discussion about how these new technologies will impact both our businesses and our personal lives. However, a fundamental misunderstanding about how these models work is fueling the discussion around what are known as the "hallucinations" these models generate. Keep reading to learn what that misunderstanding is and how to correct it.
How Is AI Hallucination Being Defined Today?
For the most part, when people talk about an AI hallucination, they mean that a generative AI process has responded to their prompt with what appears to be real, valid content but is not. With ChatGPT, there have been widely circulated and easily reproduced cases of answers that are partially wrong or even entirely untrue. As my co-author and I discussed in another blog, ChatGPT has been known to invent authors, cite papers that don't exist, and describe in detail events that never happened. Worse, and harder to catch, are cases where ChatGPT takes a real researcher who genuinely works in the field being discussed and fabricates plausible-sounding papers by that researcher!
Interestingly, we don't see as many hallucination complaints on the image and video generation side. People seem to understand that every generated image or video is largely fabricated to match their prompt, and there is little concern about whether the people or places depicted are real as long as they look reasonable for the intended use. In other words, if I ask for a picture of Albert Einstein riding a horse in the winter, and the picture I get back looks realistic, I don't care whether he ever actually rode a horse in the winter. In such a case, the onus is on me to clarify, wherever I use the image, that it came from a generative AI model and is not real.
But the dirty little secret is this: all outputs from generative AI processes, regardless of type, are effectively hallucinations. By virtue of how they work, you're simply lucky if you get a legitimate answer. How's that, you say? Let's explore this further.
Yes, All Generative AI Responses Are Hallucinations!
The open secret is in the name of these models: "generative" AI. The models generate a response to your prompt from scratch, based on the millions (or billions) of parameters they learned from their training data. They do not cut and paste or search for partial matches. Rather, they construct an answer from scratch, albeit probabilistically.
This is fundamentally different from a search engine. A search engine takes your query and tries to find existing content that closely matches it. In the end, the search engine points you to real documents, web pages, images, or videos that appear to match what you want. The search engine isn't making anything up. It can certainly do a poor job of matching your intent and serve up what seem to be erroneous results, but every link it provides is real, and any text it shows is a genuine excerpt from somewhere.
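To make the contrast concrete, here is a toy Python sketch of retrieval. The documents and the naive substring matching are invented for illustration, and real search engines are vastly more sophisticated, but the key property is the same: every result is a real, pre-existing document.

```python
def search(index: list[str], query: str) -> list[str]:
    """Return every indexed document that contains the query text.
    The matching may be poor, but each result is a real stored document."""
    return [doc for doc in index if query.lower() in doc.lower()]

# A tiny made-up index of "documents"
index = [
    "Albert Einstein developed the theory of relativity.",
    "Horses were domesticated thousands of years ago.",
]

print(search(index, "einstein"))  # returns the first document, verbatim
```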
Generative AI, on the other hand, isn't trying to match anything directly. If I ask ChatGPT for the definition of a word, it doesn't explicitly match my request to text somewhere in its training data. Rather, it probabilistically identifies, one word at a time, the text it determines is most likely to follow mine. If there are many clear definitions of my word in its training data, it may even land on what appears to be a perfect answer. But the model didn't cut and paste that answer … it generated it. You might even say it hallucinated it!
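A toy sketch can show what "one word at a time, probabilistically" means in practice. The vocabulary and hand-written probabilities below are invented for illustration; a real model computes these distributions with billions of learned parameters and conditions on the entire preceding text, but the sampling loop is conceptually the same:

```python
import random

# Invented next-word distributions, keyed by the previous word only.
# A real model conditions on everything typed so far and has a vocabulary
# of tens of thousands of tokens; this table is purely illustrative.
NEXT_WORD_PROBS = {
    "the":   {"earth": 0.6, "sky": 0.4},
    "earth": {"is": 1.0},
    "is":    {"round": 0.8, "flat": 0.2},
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no known continuation; stop
            break
        # Sample the next word from the distribution. Nothing is looked
        # up or copied from a source document; the word is generated.
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the earth is round" -- or, sometimes, "... is flat"
```

Notice that a true statement and a false one come out of exactly the same sampling step; nothing in the loop checks facts.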
Even if an underlying document contains exactly the correct answer to my prompt, there is no guarantee that ChatGPT will reproduce all or part of it. It all comes down to the probabilities. If enough people post that the earth is flat, and ChatGPT ingests those posts as training data, it will eventually start to "believe" that the earth is flat. In other words, the more statements there are that the earth is flat versus round, the more likely ChatGPT is to respond that the earth is flat.
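The same toy setup illustrates the flat-earth point. Below, the next-word probabilities are estimated by simply counting adjacent word pairs in a made-up corpus; this is nothing like how large models are actually trained, but it captures the dependence on frequency. Flood the training data with flat-earth statements and the sampled answer shifts accordingly.

```python
from collections import Counter

def prob_of_next(corpus: list[str], context: str, word: str) -> float:
    """Estimate P(word | context) by counting adjacent word pairs."""
    followers = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            if prev == context:
                followers[nxt] += 1
    return followers[word] / sum(followers.values())

corpus = ["the earth is round"] * 8 + ["the earth is flat"] * 2
print(prob_of_next(corpus, "is", "flat"))   # 0.2

# Add many more flat-earth "posts" and the estimate shifts:
corpus += ["the earth is flat"] * 30
print(prob_of_next(corpus, "is", "flat"))   # 0.8
```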
Sounds Terrible. What Do I Do?
It actually isn’t terrible. It is about understanding how generative?AI?models work and not placing more trust in them than you should. Just because ChatGPT says something, it doesn’t mean it is true. Consider ChatGPT output as a way to jump start something you’re working on, but double check what it says just like you’d double check any other input you receive.
With generative AI, many people have fallen into the trap of assuming it operates the way they want it to, or that it generates answers the way they would. This is somewhat understandable, since the answers can seem so much like what a human might have provided.
The key is to remember that generative AI is effectively producing hallucinations 100% of the time. Often, because of consistencies in their training data, those hallucinations are accurate enough to appear "real." But that is as much luck as anything, since every answer was probabilistically determined. Today's generative AI has no internal fact checking, context checking, or reality filter. Given that much of our world is well documented and many facts are widely agreed upon, generative AI will frequently stumble upon a good answer. But don't assume an answer is correct, and don't assume a good answer implies intelligence or deeper thought processes that aren't there!
Originally published on CXO Tech Magazine