The Magic Triangle of Enterprise GenAI Chat Implementations

Today, the most frequently implemented corporate GenAI macro use cases are internal or external chatbots built on some kind of RAG architecture. The reasoning seems evident: "If LLMs can handle all our information and give it back in natural language, let's build a RAG framework around them, feed them all our knowledge, and let users simply ask questions as if they had a new, omniscient super-colleague!" - says the decision maker.

Now, that last sentence contains the three main factors which seem so obvious that, in the heat of implementing the new AI toy, many companies simply overlook them. Moreover, due to the inherently non-deterministic nature of today's pre-trained, transformer-based AI systems, these factors behave a "little differently" than in a classical IT system implementation, with "some consequences". And if we don't want to see the frustrated faces of masses of users (and of the CxOs), it is essential to understand the basics. So let's review them in a short and very simplified way.


Factor #1: The Solution

It sounds as natural as it gets. But let's face it: the heart of the system, the LLM, is a probabilistic black box today. There are ongoing efforts to improve transparency, but we are not at the end of that road. And the RAG frameworks built around these LLMs are also probabilistic, comparative systems using various techniques (just a few RAG buzzwords: query augmentation, sub-query handling, hybrid search, re-ranking, categories and hierarchies, metadata filtering, knowledge graphs, etc.), each with the purpose of handling these inherent characteristics. RAG frameworks are relatively new in the IT field, less competence exists on the market, and on top of that, new technical advances for RAG systems are released every month. Thus any project aiming to implement one is closer to an R&D project, with all its iterative, trial-and-error cycles.
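To make one of those buzzwords a bit more tangible, here is a minimal, purely illustrative sketch of a hybrid-search scoring step. The chunk scores, the alpha weight and the helper names are assumptions for demonstration only, not a reference implementation; a real pipeline would add a cross-encoder re-ranker, metadata filters and much more.

```python
# Illustrative sketch of hybrid-search scoring in a RAG retrieval step.
# All scores and names are simplified placeholders, not a production stack.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    keyword_score: float   # e.g. from a BM25-style lexical index
    vector_score: float    # e.g. cosine similarity from an embedding store

def hybrid_score(chunk: Chunk, alpha: float = 0.5) -> float:
    # Blend lexical and semantic relevance; alpha is a tuning knob that has to
    # be calibrated per corpus - one of the many "R&D-like" iterations.
    return alpha * chunk.keyword_score + (1 - alpha) * chunk.vector_score

def retrieve(chunks: list[Chunk], top_k: int = 3) -> list[Chunk]:
    # First-pass hybrid ranking; a real system would then re-rank the top
    # candidates and apply metadata filtering before building the prompt.
    return sorted(chunks, key=hybrid_score, reverse=True)[:top_k]

if __name__ == "__main__":
    corpus = [
        Chunk("Travel expense policy, 2024 edition", 0.9, 0.4),
        Chunk("Cafeteria menu for this week", 0.1, 0.2),
        Chunk("How to submit a travel reimbursement", 0.6, 0.8),
    ]
    for c in retrieve(corpus, top_k=2):
        print(round(hybrid_score(c), 2), c.text)
```

Even this toy version shows why such projects feel experimental: every weight, threshold and ranking choice has to be tuned and re-tuned against real corporate data.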



Factor #2: The Knowledge

All GenAI-based corporate language solutions today are, in essence, knowledge transformers, which means knowledge is the starting point. Yet few companies realize that their AI chat PoC or implementation project is a knowledge management project. This factor is completely solution-agnostic: whatever use case a company wants to implement, with whatever tool, this is the least planned part of the whole AI implementation. For every GenAI initiative, companies need to ensure that the corporate knowledge to be handled is in order by the Go-Live date at the latest, and that this organized knowledge is continuously maintained. Period. And trust me: not the development or implementation, but knowledge management is the hardest part. (That's why I tell everyone, everywhere: even if you don't want to spend a cent yet, just start working on knowledge management now. You will be fine by the Go-Live date.)
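As a purely illustrative sketch of what "knowledge in order" could mean in practice, imagine every document in the corpus carrying an owner and a review date, with stale items flagged before they ever reach the RAG index. The field names and the 180-day threshold below are my assumptions, not a standard.

```python
# Illustrative sketch of a minimal knowledge-inventory check before Go-Live.
# Field names and the review threshold are assumptions for demonstration only.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class KnowledgeItem:
    title: str
    owner: str            # who is accountable for keeping it correct
    last_reviewed: date   # when a human last confirmed it is still valid

def stale_items(items: list[KnowledgeItem], max_age_days: int = 180) -> list[KnowledgeItem]:
    cutoff = date.today() - timedelta(days=max_age_days)
    return [i for i in items if i.last_reviewed < cutoff]

if __name__ == "__main__":
    corpus = [
        KnowledgeItem("HR onboarding guide", "hr.team", date(2023, 1, 10)),
        KnowledgeItem("IT security policy", "ciso.office", date.today()),
    ]
    for item in stale_items(corpus):
        print(f"Needs review before Go-Live: {item.title} (owner: {item.owner})")
```

The hard part, of course, is not the script but the organizational discipline behind it: someone has to own every document and actually keep it current.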



Factor #3: The Users

Classical IT systems have built-in input quality assurance: users constantly face strict rules on input fields, well-defined functions and processes. In contrast, a GenAI chat solution may have "only one" input field, yet the variability of the data received through this tiny window can surpass the input variability of a classic system. And if we think of the main use cases of AI chat applications (information gathering, searching, summarizing, analyzing, problem solving, converting, translating, creating, generating, etc.), we see that with an AI chat system the end user is one of the people who defines the function and the requirements at the moment they type their question or prompt into the input field.

This can be restricted on the system prompt side, but such restrictions can never reach the level of control a classic IT system enforces. In other words: LLM-based AI chat systems built for open-ended questions always depend more on user competence and user input quality than a classic IT system does. And this may lead to the first major disappointment, caused by the drop in answer quality after Go-Live, when masses of users start entering never-before-seen questions and prompts in never-before-seen structures and formats.
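For illustration, here is a minimal sketch of the kind of system-prompt-side guardrails meant above. The wording, the scope list and the message layout are assumptions for demonstration, not a recommended template; note that, unlike a validated form field, none of this can control what users actually type.

```python
# Illustrative sketch of system-prompt guardrails for a corporate RAG chat.
# The prompt text and topic scope are assumptions; they constrain the model's
# behavior, but cannot enforce the quality or structure of the user's input.
SYSTEM_PROMPT = """
You are an internal assistant for company policy and IT how-to questions.
- Answer only from the retrieved context passages provided to you.
- If the context does not contain the answer, say you don't know and refer
  the user to the service desk instead of guessing.
- Politely decline requests outside HR, IT and travel-policy topics.
"""

def build_messages(user_question: str, retrieved_context: str) -> list[dict]:
    # Typical chat-completion layout: guardrails in the system role, the
    # retrieved knowledge plus the raw, unvalidated user input in the user role.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{retrieved_context}\n\nQuestion: {user_question}"},
    ]

if __name__ == "__main__":
    msgs = build_messages("How many vacation days do I get?",
                          "Policy excerpt: employees receive 25 vacation days per year.")
    print(msgs[0]["content"].strip().splitlines()[0])
```

However tight the guardrails, the open input field remains exactly that: open.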


Here I wrote about RAG-based AI chat solutions, but it is good to know that the same factors have to be considered when implementing almost any kind of GenAI application. And as we can see, the Magic Triangle may quickly become a Bermuda Triangle if we treat the process as a simple IT implementation project.


Now that we have seen that each of the main factors brings additional problematic aspects compared to a classic IT system implementation, we can ask: okay, but how can we handle them?

We will dive deeper into each topic in the articles to follow.

Thank you and stay tuned.

[ ImprovAI Consulting ]


If you missed my previous articles on the topic, you may check them here:

You, the Knowledge and the AI - The True Story

Introduction to AI-land #4 - Augmentation

