Last month, I was honored to be invited to present at the ESOMAR: Art & Science of Innovation market research conference in Chicago. The event brought together hundreds of insights professionals from top global brands, research institutions, and technology companies.
My presentation was on the topic of AI hallucination. Hallucination is easy to spot in descriptive analytics use cases, such as when a model regurgitates facts from a data set, but it becomes much trickier to pinpoint in predictive analytics and when using techniques such as Retrieval-Augmented Generation (RAG). Ultimately, what matters most in predictive analytics is whether the output leads to the right business outcome.
What if we could train computers to “think” like humans, with an element of unpredictability? Could we achieve better outcomes at scale? Could the “randomness within reason” factor be the missing link to generating synthetic consumer response data that actually mirrors the intricacy and breadth of real human responses?
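One common way to put "randomness within reason" into practice is temperature-scaled sampling: instead of always returning the single highest-scoring answer, the model samples from its score distribution, and a temperature knob controls how far it strays from the top choice. Here is a minimal Python sketch of the idea; the survey options, the scores, and the 0.9 temperature are purely illustrative assumptions, not Native AI's actual implementation.

```python
import numpy as np

def sample_with_temperature(logits, temperature=0.8, rng=None):
    """Pick one option from model scores with temperature-scaled randomness.

    temperature -> 0 : always pick the top-scoring option (deterministic)
    temperature = 1 : sample in proportion to the model's own confidence
    temperature > 1 : flatter distribution, more unpredictable picks
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical survey question: simulated respondents choose among answers.
options = ["Strongly agree", "Agree", "Neutral", "Disagree"]
logits = [2.1, 1.4, 0.3, -0.5]  # illustrative model scores, not real data

counts = [0] * len(options)
for _ in range(1000):
    counts[sample_with_temperature(logits, temperature=0.9)] += 1
print(dict(zip(options, counts)))  # a spread of answers, not one repeated "best" pick
```

Run a thousand times, the simulated panel produces a distribution of answers rather than the same "optimal" response over and over, which is exactly the kind of breadth real human panels show.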
Just like the human brain, a natural language processing system must work through these four steps:
1. Interpreting the question
2. Retrieving relevant information from memory
3. Forming judgments
4. Composing the response
The third step, forming judgments, is the most prone to bias in humans and computers alike. But that's also where the magic lies.
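To make the four steps concrete, here is a deliberately simplified Python sketch of a question-answering loop shaped like that pipeline. Everything in it, from the toy Memory class to the one-line judge function, is a hypothetical illustration of the structure, not Native AI's Digital Twin code.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    """Toy stand-in for a digital twin's retrievable memory (e.g. a vector store)."""
    facts: dict

def interpret(question: str) -> str:
    # 1. Interpreting the question: reduce it to a topic key (real systems parse intent).
    return question.lower().strip("?").split()[-1]

def retrieve(memory: Memory, topic: str) -> list[str]:
    # 2. Retrieving relevant information from memory.
    return memory.facts.get(topic, [])

def judge(evidence: list[str]) -> str:
    # 3. Forming judgments: weigh the evidence. This is the step where bias
    #    (and the useful "randomness within reason") enters, in humans and models alike.
    return evidence[0] if evidence else "no strong opinion"

def compose(judgment: str) -> str:
    # 4. Composing the response.
    return f"Based on what I know, I'd say: {judgment}."

memory = Memory(facts={"coffee": ["I prefer dark roast in the morning"]})
question = "What do you think about coffee?"
print(compose(judge(retrieve(memory, interpret(question)))))
```

In a production system each of these functions would be a model or retrieval component, but the shape stays the same: step 3 is the one place where evidence gets weighed, which is exactly where bias, and useful variability, creep in.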
If you’d like to learn more about how Native AI’s Digital Twins were built to mirror the four cognitive steps of the human brain, please get in touch!