Chaining Large Language Model Prompts
Cobus Greyling
Language Models, AI Agents, Agentic Applications, Development Frameworks & Data-Centric Productivity Tools
I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.
This article considers some of the advantages and challenges of Prompt Chaining in the context of LLMs.
What is prompt chaining?
Prompt Chaining, also referred to as Large Language Model (LLM) Chaining, is the notion of creating a chain consisting of a series of model calls. These calls follow on from each other, with the output of one chain serving as the input of the next.
Each chain is intended to target a small, well-scoped sub-task, hence a single LLM is used to address multiple sequenced sub-components of a task.
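A minimal sketch of this idea is shown below. The `call_llm` helper is a hypothetical placeholder for whichever LLM provider's API you use; the sub-tasks themselves are illustrative.

```python
# A minimal prompt-chaining sketch. `call_llm` is a hypothetical helper
# that wraps whichever LLM provider's completion API you use; it takes a
# prompt string and returns the model's text output.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wrap your LLM provider's API call here.")

def summarise_then_translate(document: str) -> str:
    # Chain step 1: a small, well-scoped sub-task.
    summary = call_llm(f"Summarise the following text in three sentences:\n\n{document}")

    # Chain step 2: the output of step 1 becomes the input of step 2.
    return call_llm(f"Translate this summary into French:\n\n{summary}")
```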
In essence, prompt chaining leverages a key principle in prompt engineering known as chain of thought prompting.
The principle of Chain of Thought prompting is not only used in chaining, but also in Agents and Prompt Engineering.
Chain of thought prompting is the notion of decomposing a complex task into smaller, refined tasks, building up to the final answer.
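As an illustration (not prescribed by the article), a chain-of-thought style prompt simply instructs the model to work through intermediate steps before stating its answer. The template wording below is an assumption, and it reuses the hypothetical `call_llm` helper from the earlier sketch.

```python
# Chain-of-thought prompting: instruct the model to work through smaller
# steps before stating the final answer. Reuses the hypothetical
# `call_llm` helper from the sketch above.

COT_TEMPLATE = (
    "Answer the question below. First break the problem into smaller steps, "
    "reason through each step, and only then state the final answer.\n\n"
    "Question: {question}\n"
    "Step-by-step reasoning:"
)

def chain_of_thought(question: str) -> str:
    return call_llm(COT_TEMPLATE.format(question=question))
```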
Transparency, Controllability & Observability
There is a need for LLMs to address and solve ambitious and complex tasks. For instance, consider the following question:
List five people of notoriety who were born in the same year as the person regarded as the father of the iPhone?
When answering this question, we expect the LLM to decompose it and supply a chain of thought, or reasoning, showing how the answer to this more complex task was reached.
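One possible decomposition of this question into a chain of small sub-tasks is sketched below. The sub-task wording is an assumption, and the hypothetical `call_llm` helper from the earlier sketch is reused.

```python
# One possible decomposition of the example question into a chain of
# small sub-tasks, again using the hypothetical `call_llm` helper.

def notable_people_chain() -> str:
    # Sub-task 1: resolve the indirect reference.
    person = call_llm(
        "Who is commonly regarded as the father of the iPhone? "
        "Answer with the name only."
    )
    # Sub-task 2: feed that output into the next prompt.
    year = call_llm(f"In which year was {person} born? Answer with the year only.")
    # Sub-task 3: use the year to produce the final list.
    return call_llm(f"List five notable people who were also born in {year}.")
```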
The image above illustrates the principle of chaining. [A] is a direct instruction via a single prompt, while [B] applies chaining principles: the task is decomposed into sub-tasks using a chain-of-thought process, with ideation culminating in an improved output.
Prompt Chaining not only improves the quality of task outcomes, but also introduces system transparency, controllability and a sense of collaboration.
Stephen Broadhurst recently presented a talk on supervision and observability from an LLM perspective. You can read more about the basic principles here.
Users also become more familiar with LLM behaviour by considering the output from sub-tasks and calibrating their chains in such a way as to reach desired expectations. LLM development also becomes more observable, as alternative chains can be contrasted against each other and their downstream results compared.
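A hedged sketch of what such observability might look like in practice: record every intermediate prompt and output so a chain run can be inspected afterwards, and alternative chains compared. The `ChainTrace` structure and step prompts below are illustrative assumptions, again reusing the hypothetical `call_llm` helper.

```python
# Illustrative observability sketch: record every intermediate prompt and
# output so a chain run can be inspected, and alternative chains compared.
from dataclasses import dataclass, field

@dataclass
class ChainTrace:
    steps: list = field(default_factory=list)  # (prompt, output) pairs

    def run_step(self, prompt: str) -> str:
        output = call_llm(prompt)  # hypothetical helper from the earlier sketch
        self.steps.append((prompt, output))
        return output

# Usage: run a chain and inspect its trace afterwards.
trace = ChainTrace()
summary = trace.run_step("Summarise the report in one paragraph: <report text>")
risks = trace.run_step(f"List the three main risks mentioned here:\n{summary}")
for prompt, output in trace.steps:
    print(prompt, "->", output)
```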
Prompt Drift
Considering that each chain’s input is dependent on the output of a preceding chain, there is a danger of prompt drift: errors or inaccuracies can drift or cascade as the process flows from chain to chain.
Prompt drift can also be introduced when changes to prompt wording upstream cause unintended drift in downstream results. A small deviation introduced upstream grows downstream, and is exacerbated with each subsequent chain.
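One common way to contain this kind of drift (an illustrative pattern, not something prescribed in the article) is to validate each intermediate output before passing it downstream. The `guarded_step` helper and the simple year check below are assumptions, built on the hypothetical `call_llm` helper from the earlier sketches.

```python
# Illustrative drift guard: validate each intermediate output before it is
# passed downstream, so an upstream deviation is caught early instead of
# being amplified by later chain steps.

def guarded_step(prompt: str, validate) -> str:
    output = call_llm(prompt)  # hypothetical helper from the earlier sketch
    if not validate(output):
        raise ValueError(f"Chain step failed validation: {output!r}")
    return output

# Example: insist that a "birth year" step returns a four-digit year.
person = "Steve Jobs"  # stand-in for the output of an upstream chain step
year = guarded_step(
    f"In which year was {person} born? Answer with the year only.",
    validate=lambda text: text.strip().isdigit() and len(text.strip()) == 4,
)
```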