3 Really Bad Use Cases for LLMs

It may seem that you can “GPT-promptiosa” your way to solving anything under the sun, that waving the LLM magic wand will unlock hidden treasures. Alas, it isn’t true.

There are limitations inherent in the design of GPT-style large language models. We should not try to solve everything with LLMs, and we should not discard the so-called “old AI / Narrow AI” approaches either. In fact, they are better suited to certain problems.

Here are 3 use cases LLMs are bad at:

  1. LLMs can’t handle Breaking News: Any use case that relies on the most recent data is not a good fit for LLMs. There are RAG (retrieval-augmented generation) workarounds: if we constantly ingest fresh data and index it in some order of priority, we can paper over the gap (see the sketch after this list). But inherently, LLMs are not designed for “breaking news”.
  2. LLMs can’t make up their mind: LLMs are token predictors. So they respond to queries in ways that (to oversimplify) best keep the conversation going. They are rather bad at reaching a conclusion from objective data and cold logic. Therefore, they are a poor foundation for decision-making applications.
  3. LLMs can’t do Business: Structured, fast-changing data is not something LLMs are designed to work with, and business data is exactly that by nature. So building Generative AI applications on business data with LLMs alone is quite tricky.
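
To make the “breaking news” workaround in point 1 concrete, here is a minimal sketch of the RAG pattern. It assumes a hypothetical keyword-based retriever over freshly ingested news snippets (a real system would use embeddings and a vector store), the snippet texts and dates are invented for illustration, and the final prompt would be handed to whatever LLM you use.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class NewsSnippet:
    """A freshly ingested document, with a timestamp used for recency ranking."""
    text: str
    published_at: datetime


def retrieve_recent(query: str, corpus: list[NewsSnippet], top_k: int = 3) -> list[NewsSnippet]:
    """Naive retrieval: rank by keyword overlap, break ties by recency (newest first)."""
    query_terms = set(query.lower().split())

    def score(snippet: NewsSnippet) -> tuple[int, float]:
        overlap = len(query_terms & set(snippet.text.lower().split()))
        return (overlap, snippet.published_at.timestamp())

    return sorted(corpus, key=score, reverse=True)[:top_k]


def build_prompt(query: str, snippets: list[NewsSnippet]) -> str:
    """Inject retrieved context so the model answers from fresh data, not stale training data."""
    context = "\n".join(f"- ({s.published_at.date()}) {s.text}" for s in snippets)
    return (
        "Answer the question using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
    )


if __name__ == "__main__":
    # Hypothetical, made-up snippets standing in for a constantly updated news feed.
    corpus = [
        NewsSnippet("Acme Corp announces Q3 earnings beat expectations.",
                    datetime(2024, 10, 30, tzinfo=timezone.utc)),
        NewsSnippet("Acme Corp CEO steps down effective immediately.",
                    datetime(2024, 11, 2, tzinfo=timezone.utc)),
    ]
    prompt = build_prompt("What is the latest news about Acme Corp?",
                          retrieve_recent("latest news Acme Corp", corpus))
    print(prompt)  # This prompt would then be sent to the LLM of your choice.
```

Even with a retrieval layer like this, the model only ever sees what you manage to ingest, index, and rank. The freshness problem moves from the model to your data pipeline, which is why point 1 calls this a workaround rather than a fix.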

While LLMs have their limitations, all is not lost for Generative AI. We can look forward to revolutionary new Foundation models that unlock further generative AI innovation and pave the way for many new use cases. (Check out the new Foundation model described in this blog).

If you enjoyed this, please share it with others who might be interested, and subscribe to my Newsletter, where I talk about Generative AI and Customer Success.
