Chaining Large Language Model Prompts

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

This article considers some of the advantages and challenges of Prompt Chaining in the context of LLMs.

What is prompt chaining?

Prompt Chaining, also referred to as Large Language Model (LLM) Chaining, is the notion of creating a chain consisting of a series of model calls. The calls follow on each other in sequence, with the output of one call serving as the input of the next.

Each chain is intended to target a small, well-scoped sub-task, hence a single LLM is used to address multiple sequenced sub-components of a task.
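The flow described above can be sketched as follows. This is a minimal illustration, not a reference to any particular framework: `call_llm` is a hypothetical placeholder for a real model API, stubbed here with canned responses so the control flow runs offline.

```python
# A minimal sketch of a two-step prompt chain. `call_llm` is a
# placeholder for any real model API; it is stubbed with canned
# responses here so the example can run offline.
def call_llm(prompt: str) -> str:
    canned = {
        "Summarise: LLMs can decompose complex tasks.": "LLMs decompose tasks.",
        "Translate to French: LLMs decompose tasks.": "Les LLM decomposent les taches.",
    }
    return canned.get(prompt, "")

def run_chain(text: str) -> str:
    # Sub-task 1: summarise the input text.
    summary = call_llm(f"Summarise: {text}")
    # Sub-task 2: the first call's output becomes the second call's input.
    return call_llm(f"Translate to French: {summary}")

result = run_chain("LLMs can decompose complex tasks.")
```

The key point is structural: each sub-task is a separate, well-scoped call, and the chain wires their inputs and outputs together.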

In essence, prompt chaining leverages a key principle in prompt engineering, known as chain-of-thought prompting.

The principle of Chain of Thought prompting is not only used in chaining, but also in Agents and Prompt Engineering.

Chain of thought prompting is the notion of decomposing a complex task into smaller, refined sub-tasks, building up to the final answer.

Transparency, Controllability & Observability

There is a need for LLMs to address and solve ambitious and complex tasks. For instance, consider the following question:

List five people of note who were born in the same year as the person regarded as the father of the iPhone.

When answering this question, we expect the LLM to decompose the question and supply a chain of thought or reasoning on how the answer was reached for this more complex task.
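One plausible decomposition of this question can be sketched as a three-step chain. As before, `call_llm` is a hypothetical stand-in for a real model call, stubbed with illustrative answers (Steve Jobs was indeed born in 1955, as were the people listed).

```python
# A sketch of how the question above might decompose into a chain.
# `call_llm` stands in for a real model call and is stubbed here.
def call_llm(prompt: str) -> str:
    canned = {
        "Who is regarded as the father of the iPhone?": "Steve Jobs",
        "In which year was Steve Jobs born?": "1955",
        "List five notable people born in 1955.":
            "Bill Gates; Tim Berners-Lee; Rowan Atkinson; "
            "Whoopi Goldberg; Kevin Costner",
    }
    return canned[prompt]

# Each sub-task's output feeds the next prompt in the chain.
person = call_llm("Who is regarded as the father of the iPhone?")
year = call_llm(f"In which year was {person} born?")
answer = call_llm(f"List five notable people born in {year}.")
```

Making each reasoning step an explicit call is what exposes the chain of thought: every intermediate answer can be inspected before it feeds the next step.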

[Image: a direct prompt [A] contrasted with a chained, decomposed prompt [B]]

Considering the image above, the principle of chaining is illustrated. [A] is a direct instruction via a prompt, while [B] is where chaining principles are used: the task is decomposed into sub-tasks using a chain-of-thought process, with ideation culminating in an improved output.

Prompt Chaining not only improves the quality of task outcomes, but also introduces system transparency, controllability and a sense of collaboration.

Stephen Broadhurst recently presented a talk on supervision and observability from an LLM perspective. You can read more about the basic principles here.

Users also become more familiar with LLM behaviour by considering the output from sub-tasks and calibrating their chains to reach the desired outcome. LLM development becomes more observable, as alternative chains can be contrasted against each other and their downstream results compared.
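One simple way to get this observability is to record every step's prompt and output as the chain runs, so alternative chains can be contrasted afterwards. The trace helper below is purely illustrative, not from any particular library, and the model call is stubbed so the sketch runs offline.

```python
# Recording each step's prompt and output makes a chain observable:
# intermediate results can be inspected and alternative chains compared.
from dataclasses import dataclass, field

@dataclass
class ChainTrace:
    steps: list = field(default_factory=list)

    def record(self, prompt: str, output: str) -> str:
        # Store each (prompt, output) pair for later inspection.
        self.steps.append({"prompt": prompt, "output": output})
        return output

def call_llm(prompt: str) -> str:
    # Stubbed model call so the example runs offline.
    return f"[response to: {prompt}]"

trace = ChainTrace()
step1 = trace.record("Extract key facts", call_llm("Extract key facts"))
step2 = trace.record(f"Draft summary from {step1}",
                     call_llm(f"Draft summary from {step1}"))
```

With such a trace in hand, two variants of a chain can be run side by side and their intermediate outputs diffed step by step.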

Prompt Drift

Considering that each chain’s input is dependent on the output of the preceding chain, the danger exists of prompt drift: errors or inaccuracies can cascade as the process flows from chain to chain.


Prompt drift can also be introduced when a change to prompt wording upstream causes unintended drift in downstream results. A small deviation introduced upstream grows downstream, and is exacerbated with each successive chain.
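This compounding effect can be illustrated with a toy simulation rather than a real model: `chain_step` below is a hypothetical stand-in for a model call whose output quotes its full input, so a small upstream wording change reappears, and is multiplied, at every later step.

```python
# A toy simulation of prompt drift. Each step's output embeds its full
# input twice, so any upstream deviation is compounded at every step.
def chain_step(text: str) -> str:
    # Stand-in for a model call that elaborates on its input.
    return f"Given '{text}' and restating '{text}', summarise."

prompt_a = "Summarise Q3 sales"
prompt_b = "Summarise Q3 sales by region"  # small upstream edit

deviation_start = abs(len(prompt_b) - len(prompt_a))
for _ in range(2):
    prompt_a, prompt_b = chain_step(prompt_a), chain_step(prompt_b)
deviation_end = abs(len(prompt_b) - len(prompt_a))
# The gap between the two chains grows with every step.
```

A real LLM will not amplify differences this mechanically, but the shape of the problem is the same: whatever diverges upstream is carried into, and elaborated on by, every downstream call.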

