Insight vs. Hallucination: Can You Trust ChatGPT's Data Inferences?

ChatGPT has charmed a lot of people, but some of us have developed unreasonable expectations of it.

One of them is Sara*, the CEO of a company my friend co-founded. Sara believes that ChatGPT should be able to analyze all the data her company collects for its clients and give a causal answer to 'why did production go down this morning?' Not just a simple answer that production line 7C went down, but the actual reason why the assembly line went down.

In fact, it is not just Sara; even Sam Altman thinks this is possible, and Y Combinator has funded two companies trying to solve this problem with ChatGPT. I have been following some of those startups, and the results are not very promising so far. Here is why I believe ChatGPT is not a great fit for this use case:

Data privacy & security: GPT models need to ingest your data before giving you any answers, so why would you hand your data over? Pretty much 100% of the executives I have spoken with are NOT ready to give all of their data to any external entity, let alone ChatGPT.

Cost: The cost per query is high, and it can quickly spin out of control for two reasons:

  • A lot of users asking questions without a good understanding of what to expect.
  • As you feed in more and more data, the costs go up more than proportionally (see the rough cost sketch below).
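
To make the second point concrete, here is a back-of-the-envelope sketch in Python. The per-token prices and usage numbers below are illustrative assumptions, not actual OpenAI pricing:

```python
# Rough cost sketch. The per-token prices are illustrative
# placeholders, NOT current OpenAI list prices.
INPUT_PRICE_PER_1K = 0.01   # hypothetical $ per 1K prompt tokens
OUTPUT_PRICE_PER_1K = 0.03  # hypothetical $ per 1K completion tokens

def query_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of a single model call."""
    return (prompt_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (completion_tokens / 1000) * OUTPUT_PRICE_PER_1K

# If every question re-sends a large slice of the dataset as context,
# prompt tokens dominate, and cost scales with data size multiplied by
# the number of questions your users ask.
data_tokens_per_query = 50_000   # assumed: data pasted into each prompt
daily_questions = 200            # assumed: questions across all users
daily_cost = daily_questions * query_cost(data_tokens_per_query + 500, 800)
print(f"~${daily_cost:,.2f}/day")
```

At these assumed numbers that is roughly $106/day for one modest workload; double the data and double the users, and the bill quadruples.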

Short-term memory constraints: Even if you agreed to suck it up and hand your data to ChatGPT, you can load and process only so much data before you start hitting context-window limits, especially if your analysis requires referencing information from earlier data or other datasets. This shortcoming can hinder the model's ability to identify complex relationships within the data.
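
To see how quickly a real dataset blows past the window, here is a minimal sketch using OpenAI's tiktoken tokenizer. The 128K-token limit and the prompt overhead are assumptions; substitute your model's actual context size:

```python
# Minimal sketch of the context-window problem, using the tiktoken
# tokenizer. CONTEXT_LIMIT is an assumption; check your model's docs.
import tiktoken

CONTEXT_LIMIT = 128_000  # assumed context window, in tokens

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(serialized_data: str, prompt_overhead: int = 2_000) -> bool:
    """True if the serialized dataset plus instructions fits the window."""
    return len(enc.encode(serialized_data)) + prompt_overhead <= CONTEXT_LIMIT
```

A few months of production-line logs exceeds any current window, so the model never 'sees' the whole dataset at once, and the cross-references it would need for a causal answer are exactly what gets truncated away.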

Right is not a right: What is great for lovely images and flowery language is not great for data; ask any deep-thinking (and likely boring) data scientist. Accurate answers are not guaranteed with ChatGPT; you will often get great-sounding answers at the expense of factual accuracy. Misinterpretations or inconsistencies in the data can lead the model to confidently assert misleading conclusions.
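
If you do let a model near your numbers, one cheap guardrail is to recompute any figure it reports straight from the source data before acting on it. A minimal sketch, where the production.csv file and its column names are hypothetical:

```python
# Cheap guardrail: recompute a model-reported figure from source data.
# The file name, columns, and tolerance are hypothetical.
import pandas as pd

df = pd.read_csv("production.csv")

model_claim = 7.2  # the % drop the model "explained" in its answer

line_7c = df[df["line"] == "7C"]["output_units"]
observed_drop = 100 * (1 - line_7c.iloc[-1] / line_7c.iloc[-2])

if abs(observed_drop - model_claim) > 0.5:
    print("Model's figure doesn't match the data; treat its "
          "'why' narrative as suspect too.")
```

If the model cannot even get the magnitude of the drop right, its causal story about line 7C deserves no more trust.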

Limited domain knowledge: At least as of now, I am not ready to tell my friends that ChatGPT has the reasoning capability to choose the right model for a given problem and dataset. It lacks the specialized understanding of statistical methods and data structures crucial for in-depth analysis. Its outputs, such as charts and graphs, may appear statistically sound but can lack the underlying rigor required for reliable conclusions.


What my conversations with Sara confirmed is that 'the world needs a better, cheaper, faster way to analyze data, so talent can focus on complex problems.' This is exactly the problem QuaerisAI solves. It frees your people from doing rote analysis and dragging fields around to make pretty (and useless) dashboards, so they can focus on meaningful engineering and science work.

In closing: yes, ChatGPT offers intriguing capabilities, but it can at best be a supplementary tool in the data world. For comprehensive and reliable data analysis, human expertise and critical thinking remain irreplaceable. Unfortunately, Sara may not achieve her vision with ChatGPT, and she may not get the OK from her customers to share their data with it.

Next week, I will share my musings on why Microsoft will not see a 10x-30x return on its $10Bn investment in OpenAI.

*: True story, different name.

Rishi Rana

President/CXO | PE Operator | Board Member | Strategic Advisor | Mentor | Product Excellence | Engineering Transformation | Operational Excellence

5 months ago

Great read, Rishi Bhatnagar! Thanks for sharing! I agree that deeper analytical work will require human intervention, as hallucinations may be an intrinsic property of current LLMs, stemming from their inherent limitations in handling ambiguous contexts.

Sarah Guidry

SVP, Head of Analytics & Corporate Strategy at LendingTree

5 months ago

What a great article, Rishi Bhatnagar. ChatGPT's technology is an unlock for so many things, but applying it in the wrong ways can be detrimental on a few fronts. One, letting it run and getting the wrong answers, resulting in making the wrong decisions. Two (and maybe less obvious) is the buy-in you'll lose in 'the art of the possible' by jumping to solutions that aren't applicable to the tool. Just because you have a hammer does not mean everything is a nail. We should embrace the technology for sure, but also be aware of its limitations. Pushing ChatGPT in places where it doesn't fit will set us back.

Brady Smith

Director of Business Intelligence at Kalkomey Enterprises, LLC

5 months ago

Great read, thanks Rishi!!!

Gary Cao

Advisor to CEOs and Boards on AI Analytics Data Strategy Roadmap | Serial Founder of 8 Data Analytics Internal Startups across Industries | Board Member

5 months ago

Thank you Rishi Bhatnagar for the real battle stories from the frontline. Sharing observations and bouncing ideas among practitioners would be helpful to all members of our professional community. Frustrations and gaps also mean opportunities, but change is hard :)

Kevin Petrie

Vice President of Research at BARC

5 months ago

Rishi, good stuff. Demand for smart humans, data/business analysts included, remains strong by all measures. ChatGPT can make smart people more productive, but only if they have reasonable expectations and inspect the outputs.
