It Is Time to End the Generative AI Craze
Prompt by M. Konwiser, Generated by GPT4/DALL-E

Last week's AI conference taught us a few important things: precision, objectivity, governance, and accountability are essential components of a successful enterprise AI deployment, and AI is here to stay.

While there's nothing wrong with AI's permanence in our culture and our business processes, we are still in the "craze phase".

According to the Oxford dictionary, a craze is:

an enthusiasm for a particular activity or object which appears suddenly and achieves widespread but short-lived popularity.

I suppose the definition of short-lived can be debated; it could be minutes or years. Frankly, I believe that for generative AI, we're near the apex of the movement right now.

While modern AI models are capable and relatively easy to use, the masses have generally stopped discussing the existence and value of the more traditional models that are still available.

Generative is Not Snake Oil or a Panacea

The capabilities of transformer models, including more advanced unsupervised learning, RAG (retrieval-augmented generation, the ability to use personal or corporate documents to enhance the AI's responses), and a natural language interface, are not a scam or deceptive marketing. Generative AI can really do those things.
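
To make the RAG pattern above concrete, here is a minimal sketch of the retrieve-then-prompt flow. It is an illustration only, not any vendor's implementation: the toy hashed bag-of-words embedding and the sample policy documents are my own assumptions, where a real deployment would use a proper embedding model and a vector database.

```python
# Minimal sketch of the RAG pattern: retrieve relevant corporate documents,
# then prepend them to the user's prompt before calling an LLM.
# The embedding here is a toy bag-of-words hash; a real deployment would
# use a proper embedding model and a vector database.
import numpy as np
from collections import Counter

DOCS = [
    "Returns are accepted within 30 days with a receipt.",
    "Refunds for special fares require manager approval.",
    "Live agents are available weekdays from 9am to 5pm.",
]

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hash each word into a fixed-size count vector."""
    vec = np.zeros(dim)
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

DOC_VECS = np.stack([embed(d) for d in DOCS])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = DOC_VECS @ embed(query)
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Ground the LLM's answer in the retrieved policy text."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is your return policy?"))
```

The point of the pattern is that the retrieved corporate text, not the model's general training data, supplies the grounding for the answer.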

However, people and businesses have reached a point where they are implementing Large Language Models for everything. No matter how much an LLM may look like a hammer though, not everything is a nail.

Chatbots existed before generative AI, and they did their job. Generative AI added a more human interface and a more natural type of interaction between machine and human, but it also added complications.

Generative AI has two basic core instructions: provide responses to a prompt, and accept feedback from the human operator on how to be better.

There have already been cases, which I've written about previously, where a generative chatbot followed those core instructions exactly and ended up giving the customer what they wanted, even if the business it represented couldn't make good on the commitments.

In one case, a business posted a disclaimer that its AI bot didn't have all the answers and that a live operator would need to offer a final answer. If a human is needed regardless, why would the LLM venture into making unbacked commitments instead of immediately deferring to a live person?

The system wasn't tuned and tested (inferenced) thoroughly enough to account for someone asking something outside of a more predictable conversation pattern.

In a second case, an LLM presented concrete information about a return policy that did not even exist. The customer asked their question with no deception or malicious intent, but the company's policies were so confusing and contradictory that the model, following its core purpose, did what it could to be factual while still offering the customer what they wanted.

The company, when challenged to make good on the LLM's commitments, tried to cut the LLM loose. They said that the LLM wasn't an agent of theirs but an independent operator, and that they shouldn't and couldn't be held liable for its commitments. Courts and public opinion disagreed.

So why then use an LLM at all?

LLMs can also retrieve and present a tremendous amount of accurate, valuable information without a human in the loop, but both the business and the customer need to understand the limitations and vulnerabilities and set the right expectations.

People can use power tools without safety gear or without reading the instructions, but they're putting themselves at significant risk of personal injury. That's the consumer's fault.

If a business doesn't include complete safety instructions or design a system with safety features and the consumer gets hurt, that's the business's fault.

With the broad introduction of LLMs, both are happening at the same time.

Predictive Still Has Tremendous Value

I was with a customer last week to discuss how the latest AI models could be applied to their business. I spent three hours with two colleagues and a PhD in data science and machine learning attempting to design an approach that would position the advantages of generative AI.

While we came up with a very interesting use case, it wasn't so compelling that the customer had a definitive reason to act.

The key was the nature of their business and the needs of the team I was speaking with.

This team works with statistics and probability. They evaluate reams of historical performance and activity data to anticipate the future. This includes elements of time series analysis, which covers everything from financial market analysis to sales performance and buying behaviors.

Generative AI is guesswork and fabrication. They needed concrete analysis. They needed predictive AI.
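
To show what that concrete analysis looks like in practice, here is a minimal sketch of a deterministic time series forecast: an autoregression on lagged values fit by ordinary least squares. The sales figures are made up for illustration; the point is that the same history always yields the same forecast, with no sampling and no fabrication.

```python
# Minimal sketch of a deterministic time series forecast: an autoregression
# on lagged values, fit by ordinary least squares. Given the same history,
# it always produces the same forecast.
import numpy as np

def fit_ar(history: np.ndarray, lags: int = 3) -> np.ndarray:
    """Fit y[t] = c + w1*y[t-1] + ... + wk*y[t-k] by least squares."""
    rows = [history[t - lags:t][::-1] for t in range(lags, len(history))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    y = history[lags:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(history: np.ndarray, coeffs: np.ndarray, steps: int = 3) -> list[float]:
    """Roll the fitted model forward to predict the next few periods."""
    lags = len(coeffs) - 1
    window = list(history[-lags:])
    preds = []
    for _ in range(steps):
        x = np.concatenate(([1.0], window[::-1][:lags]))
        nxt = float(x @ coeffs)
        preds.append(nxt)
        window.append(nxt)
    return preds

# Made-up monthly sales history for illustration only.
sales = np.array([100, 104, 110, 115, 119, 126, 131, 138], dtype=float)
print(forecast(sales, fit_ar(sales)))
```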

Predictive AI requires substantial training and tuning, but it also has two benefits that generative AI does not:

  • Consistent results - predictive AI is built with flowchart-style logic. The method a predictive model uses to reach an outcome is consistent, explainable, and transparent; it only varies when the model design changes.
  • Easy to interpret - since the way the model works is easily visible and well understood, its outputs, whether accurate or erroneous, can be adjusted until the desired outcome is achieved (see the sketch after this list).
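
Here is a small worked example of both properties, using a toy decision tree (my own illustration; the feature names and data are invented): the full decision path can be printed and audited, and identical inputs always produce identical outputs.

```python
# Minimal sketch of why predictive models are consistent and interpretable:
# a small decision tree whose entire decision path can be printed and audited.
# Feature names and data are made up for illustration.
from sklearn.tree import DecisionTreeRegressor, export_text

# Toy training data: [prior_purchases, days_since_last_visit] -> expected spend
X = [[1, 30], [5, 7], [8, 2], [2, 60], [10, 1], [3, 14]]
y = [20, 80, 150, 10, 200, 45]

model = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)

# The rules are visible, and the same input always yields the same output.
print(export_text(model, feature_names=["prior_purchases", "days_since_last_visit"]))
print(model.predict([[4, 10]]))  # deterministic: identical on every run
```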

If you need facts, numbers, and concrete data, predictive AI models are the way to go. They will never fabricate information or manipulate their data to better align with their operator's expectations.

That said, there's also more to the conversation than simply predictive AI and the narrow-scope generative AI category of LLMs.

The Bigger Picture and the Main Link

AI companies have often conjoined the discussion of generative AI and LLMs, but not every generative AI model is an LLM. In fact, there are myriad models out there with unsupervised learning, RAG capabilities, and generative properties that are not used for human interaction.

However, what LLMs can do when properly tuned and tested is provide an interface between predictive models, non-LLM generative models, and humans.
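
As a rough sketch of that interface role (an assumption about the general pattern, not any specific product), the LLM layer only translates between the human and the underlying models; the numbers still come from the deterministic predictive model. Here `llm_rephrase` and `predictive_forecast` are hypothetical stand-ins for a real LLM API call and a tuned predictive model.

```python
# Minimal sketch of the "LLM as interface" idea: the language model translates
# between the user and a deterministic predictive model; the numbers come from
# the predictive model, not from the LLM.

def predictive_forecast(region: str) -> float:
    """Stand-in for a tuned predictive model (e.g., the autoregression above)."""
    historical_growth = {"northeast": 0.042, "midwest": 0.031}
    return historical_growth.get(region.lower(), 0.0)

def llm_rephrase(facts: str) -> str:
    """Stand-in for an LLM call that turns structured facts into plain language."""
    return f"Here's what the forecasting model projects: {facts}"

def answer(question: str) -> str:
    # The LLM layer routes the request; the predictive model supplies the answer.
    region = "northeast" if "northeast" in question.lower() else "midwest"
    growth = predictive_forecast(region)
    return llm_rephrase(f"{region} sales growth of {growth:.1%} next quarter")

print(answer("What should we expect for northeast sales next quarter?"))
```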

When DOS was first released, it was a dream for many programmers: home computers that could accept scripts and code, lightweight and functional. While it was popular with that group, home computing didn't go mainstream until the GUI was developed. Windows, OS/2 Warp, MacOS, NeXT, X: no matter which platform or underlying OS, the ability of these operating system creators to provide an easy-to-use way for non-coders to interact with computers is what made home computing mainstream.

That is what LLMs are doing for AI, and that's why AI has become so popular. LLMs have brought access to advanced computing and complex AI models to the masses. They are the "Windows" of the AI world.

Even with that, remember it's not always the visual interface you use that is doing all the work. There is still a complex, advanced OS sitting below it which is feeding and interacting with the interface the home user is looking at.

With AI, we often confuse the interface with the complicated back end. In some cases, such as creative writing and graphics, they are the same. In other cases, they are not.

While we should all appreciate the value of generative AI, and especially the introduction of LLMs, we also need to let this craze settle down so the focus can return to the community and business value of AI for everyone, and not just to what cool thing we can "get the computer to say next".

Let's remember there is more than one way to tackle a problem. Not everything is a nail.

Yet another solid proof point on why governance is critical. It will provide a roadmap to select and implement the right combination of AI (predictive and generative) and non-AI to meet business needs, without being overly complex and without failing to take advantage of all the technology available.





