A Series of Unfortunate Decisions

When a person asks a question of an LLM, the LLM responds. But there’s a good chance of an error in the answer. Depending on the model or the question, it could be a 10% chance, or 20%, or much higher.

The inaccuracy could be a hallucination (a fabricated answer) or a wrong answer or a partially correct answer.

So a person can enter many different types of questions & receive many different types of answers, some of which are correct & some of which are not.

In this chart, the arrow out of the LLM represents a correct answer. Askew arrows represent errors.

Today, when we use LLMs, most of the time a human checks the output after every step. But startups are pushing the limits of these models by asking them to chain work.

Imagine I ask an LLM chain to make a presentation about the best cars to buy for a family of 5. First, I ask for a list of those cars, then for a slide on cost, another on fuel economy, & yet another on color selection.

The AI must plan what to do at each step. It starts by finding the car names. Then it searches the web, or its memory, for the necessary data, & then it creates each slide.
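
In code, that chain might look something like the sketch below. It assumes a hypothetical ask_llm() helper wrapping whatever chat-completion API you use; the prompts & function names are illustrative, not a specific framework.

```python
# Hypothetical sketch of the car-presentation chain described above.
# ask_llm() is an assumed wrapper around any chat-completion API, not a specific library.

def ask_llm(prompt: str) -> str:
    """Placeholder: call your model provider here and return its text response."""
    raise NotImplementedError

def build_deck() -> list[str]:
    # Step 1: get the list of cars. Any error here propagates to every later step.
    cars = ask_llm(
        "List the 5 best cars for a family of 5. Return one car name per line."
    ).splitlines()

    slides = []
    for car in cars:
        # Steps 2+: one call per slide topic, each conditioned on step 1's output.
        for topic in ("purchase cost", "fuel economy", "color selection"):
            slides.append(ask_llm(f"Write a slide about the {topic} of the {car}."))
    return slides
```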

As AI chains these calls together, the universe of potential outcomes explodes.
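
A rough way to see the compounding, assuming for illustration that each call is right 90% of the time & that errors are independent:

```python
# Illustrative numbers only: 90% per-step accuracy, independent errors.
per_step_accuracy = 0.90

for steps in (1, 4, 8, 16):
    chain_accuracy = per_step_accuracy ** steps
    print(f"{steps:2d} steps -> {chain_accuracy:.0%} chance the whole chain is correct")

# 1 step: 90%, 4 steps: ~66%, 8 steps: ~43%, 16 steps: ~19%
```

Even a modest per-step error rate leaves a long chain wrong more often than right.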

If the LLM errs at the first step (it finds 4 cars that exist, 1 car that is hallucinated, & a boat), then the remaining effort is wasted. The error compounds from the first step & the deck is useless.

As we build more complex workloads, managing errors will become a critical part of building products.

Design patterns for this are early. I imagine it this way:

At the end of every step, another model validates the output of the AI. Perhaps this is a classical ML classifier that checks the output of the LLM. It could also be an adversarial network (a GAN) that tries to find errors in the output.
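
A minimal sketch of that step-level check, reusing the hypothetical ask_llm() from above & adding an assumed validate() function; in practice the validator could be a small classifier, a second LLM acting as a judge, or a lookup against a trusted source.

```python
# Sketch of per-step validation with bounded retries.
# ask_llm() and validate() are assumed placeholders, not a specific library's API.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # call your model provider here

def validate(output: str) -> bool:
    """Placeholder checker: a small classifier, a judge LLM,
    or a lookup against a trusted source (e.g. a real car database)."""
    raise NotImplementedError

def run_step(prompt: str, max_retries: int = 2) -> str:
    # Re-ask a bounded number of times when the checker rejects the output,
    # rather than letting the error flow into the next step of the chain.
    for _ in range(max_retries + 1):
        output = ask_llm(prompt)
        if validate(output):
            return output
    raise RuntimeError("Step failed validation; escalate to a human reviewer.")
```

Bounding the retries keeps cost predictable; on repeated failure the work falls back to a human, which matches how most teams use LLMs today.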

The effectiveness of the overall chained AI system will depend on minimizing the error rate at each step. Otherwise, AI systems will make a series of unfortunate decisions & their work won’t be very useful.

Max Anfilofyev

Chief CareBot | Scaling Patient Care 8x with AI | Chief Product Officer @ DR | Connect to scale with AI

6 months

Getting a second agent to validate the first agent's output is an easy way to reduce hallucinations.

Puneet A.

Co-founder and CEO at AIMon | Helping you build more deterministic LLM Apps

6 months

Great post Tomasz Tunguz. While validation checks are essential for LLMs, the lack of scalable and cost-effective solutions is a major hurdle. In the evaluation phase, engineers make limited use of expensive API calls to LLMs for checks like hallucination detection, but this doesn't scale to production deployments. At AIMon, we are addressing this problem by building lightweight solutions that validate LLM outputs with low latency and accuracy on par with GPT-4 (based on industry-standard benchmarks). I am happy to grab a slot and chat more about this topic.

Jay B.

Technophile & Software Creator | C-Suite Network Liaison

6 months

Factors like model limitations or ambiguous questions can contribute to inaccuracies, ranging from minor misunderstandings to more significant errors. It's important to approach AI-generated responses with a critical mindset and cross-reference information when necessary.

Thiyagarajan Maruthavanan (Rajan)

Managing Partner @Upekkha (SF/India) | 100+ SaaS Founders → Vertical AI Acceleration | Weekly Notes: India × Global Markets x AI.

6 months

Until hallucination is fixed, no enterprise will adopt AI in production. They will be stuck in the pilot-to-production phase.

Pedro Cortés

SaaS Company? I’ll rewrite your vague landing page into a clear, conversion-focused page in 7 business days.

6 months

Good point on LLM errors. But what's the game plan for handling these misfires?
