The GenAI conundrum

So you are the CEO of a company and you have heard of this wonderful new toy called Generative AI. You call a meeting of your leadership team and declare that henceforth, your company will walk the GenAI path. You put together a task force and ask them to come up with a plan for moving the company along the path you have decided is the right one.

The task force uses ChatGPT to come up with a list of use cases, passes them off as its own, spends an inordinate amount of time producing jazzy PowerPoint slides, and presents them to you along with cost estimates and timelines.

There is one little problem with this, though: not every problem requires a GenAI solution. Conversely, GenAI can't solve every problem.

Let's look at it this way. If you ask ChatGPT about use cases for supply chain management, here are the first four use cases.

Let's pick #4, Route Optimization: essentially a constrained optimization problem over multiple variables. Techniques such as Simulated Annealing, Particle Swarm Optimization, Genetic Algorithms, and a host of other stochastic optimizers have years of extensive research behind them and have been applied to exactly this problem.

Are you really telling me that, today, a powerful language model can carry out a multi-variable, constrained, stochastic optimization for me? The answer, most emphatically, is no.
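To make the contrast concrete, here is a minimal sketch of route optimization solved the classical way, with simulated annealing, one of the stochastic techniques mentioned above. The five-depot instance and all parameter values are illustrative inventions, not from any real supply chain:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal(dist, steps=20000, t0=10.0, cooling=0.9995, seed=0):
    """Simulated annealing over tours using 2-opt segment reversals."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour[:]
    t = t0
    for _ in range(steps):
        # Propose a 2-opt style move: reverse a random segment.
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour = cand
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
        t *= cooling  # cool the temperature
    return best

# Five hypothetical depots on a line; the optimal closed tour has length 8.0.
pts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
best = anneal(dist)
print(tour_length(best, dist))
```

Note what the method needs: an explicit objective, an explicit move set, and an explicit acceptance rule. None of that comes from predicting the next token, which is the point being made here.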

There are many things that GenAI can do really well. Create content. Summarize text. Text-to-image. Text-to-video. Language translation. Code translation. Just to name a few.

So if you, as a CEO, really want to benefit from GenAI, please identify the right set of problems, the ones that are genuinely amenable to it.

You will save yourself, and your task force, a lot of trouble.

Sridhar Mahadevan

Thinker. Problem Solver. Knowledge Seeker.

1y

That's where plugins come into the picture. ChatGPT already knows how to delegate calculus to Wolfram (Mathematica). In fact, ChatGPT already knows it can't determine whether a number is prime by guessing the next word with its transformer architecture. Aware of that, it writes code that passes the task to the SymPy library function isprime and gets its answer. Of course there are many glitches, but it is getting there. So with a good catalog of plugins providing add-on skills, a GenAI/LLM/GPT-type machine will take on the role of a maestro in a symphony orchestra. It will, of course, delegate linear programming, optimization, and OR tasks to appropriate solvers. Its task is to understand what is expected, extract the relevant parameters, and delegate or code it. But in the future, training on synthetic data, and possibly one or two more fundamental innovations, may get the next GPT machines to code and reason better than us. So much so that they could write SymPy and Mathematica themselves. Then we can replace CEOs, OR guys, and the rest of us with these Super AGIs :)
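The delegation pattern this comment describes can be sketched in a few lines. The tool catalog and router below are illustrative stand-ins of my own; a real system would register an actual library call such as SymPy's isprime rather than the trial-division check shown here:

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division -- a stand-in for sympy.isprime."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

# The "catalog of plugins": task names mapped to add-on skills.
TOOLS = {"isprime": is_prime}

def answer(task: str, arg: int):
    """The model's job in this pattern: recognize the task,
    extract the parameter, and delegate to the right tool."""
    if task in TOOLS:
        return TOOLS[task](arg)
    raise ValueError(f"no tool registered for {task!r}")

print(answer("isprime", 97))  # 97 is prime
print(answer("isprime", 91))  # 91 = 7 * 13, not prime
```

The hard part in practice is not the dispatch table but the step before it: reliably recognizing which tool a free-form request calls for and extracting its parameters, which is exactly where today's glitches live.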

Vinay Mehendi, PhD

World's Largest Technographics Provider| India's Best GCC Intelligence Provider

1y

Screwdriver syndrome ...my advisor used to call it that.
