LangChain Chains: Powering AI with Structured Execution

When building AI-powered applications, we often need to process user inputs, format prompts, retrieve relevant data, and call AI models. Instead of manually handling these steps, LangChain chains allow us to combine multiple operations into a structured pipeline.

In this article, we’ll demystify LangChain chains using our React-based AI Joke Generator as an example. We’ll cover:

- What are LangChain chains?

- Why do we need them?

- How to create and use a chain

- A step-by-step breakdown of how our joke generator uses a chain

By the end, you’ll understand how chains work and how they help structure AI workflows efficiently!


What Are LangChain Chains?

A chain in LangChain is a sequence of connected steps that pass data from one step to the next. Instead of handling everything manually, chains automate the process of:

- Formatting inputs (e.g., inserting a user’s message into a prompt template)

- Retrieving relevant memory (e.g., past jokes from history)

- Calling the AI model (e.g., OpenAI’s GPT-4)

- Returning a structured response

Think of a chain as an assembly line where each step builds upon the last:

User Input → Format Prompt → Retrieve Memory → Call AI Model → Return Response

Without chains, we’d have to manually pass data around and handle each step separately. Chains simplify AI workflows by managing these steps automatically.
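To make the idea concrete, here is a minimal, self-contained sketch (illustrative only, not part of the joke generator) showing how a RunnableSequence pipes each step’s output into the next step:

import { RunnableSequence } from "@langchain/core/runnables";

// Plain functions are coerced into runnable steps; the output of one
// step becomes the input of the next.
const toyChain = RunnableSequence.from([
  (text) => text.trim().toLowerCase(),   // step 1: normalize the raw input
  (text) => `You said: "${text}"`,       // step 2: format a response
]);

// Inside an async context:
const result = await toyChain.invoke("  Hello Chains!  ");
console.log(result); // -> You said: "hello chains!"

Our joke generator’s chain follows exactly this pattern, just with a prompt template and an AI model as the later steps.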


How We Use a Chain in Our AI Joke Generator

Our joke generator follows this structured LangChain chain:

1. User asks for a joke (e.g., “Tell me a dad joke about programming”).

2. Retrieve past jokes from memory (to avoid repetition).

3. Format the prompt using a structured template.

4. Call OpenAI’s API with the formatted input.

5. Store the AI’s response back into memory.

6. Display the joke in the UI.

Let’s go step by step and see how we implemented this.


Setting Up Our Project

If you haven’t already, clone the project and switch to the updated Memory branch:

git clone -b Memory https://github.com/ranyelhousieny/LangChain_React.git
cd LangChain_React
npm install
npm start

Step 1: Import LangChain Components

First, we import the necessary modules for using chains in LangChain.js:

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { BufferMemory } from "langchain/memory";
        

What These Do:

- ChatOpenAI → calls OpenAI’s GPT model.

- ChatPromptTemplate → creates structured prompts with dynamic variables.

- MessagesPlaceholder → inserts memory (conversation history) into prompts.

- RunnableSequence → connects all steps into a structured chain.

- BufferMemory → stores previous AI-generated jokes.
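One note: the chain we build in Step 4 references a model object that these imports alone don’t create. A minimal sketch of instantiating it might look like the following (the model name, temperature, and environment-variable name are illustrative assumptions, not taken from the repo):

// Assumed setup: the ChatOpenAI instance the chain will call.
// Adjust the model name and key handling to your own configuration.
const model = new ChatOpenAI({
  modelName: "gpt-4",
  temperature: 0.9, // a higher temperature encourages more varied jokes
  openAIApiKey: process.env.REACT_APP_OPENAI_API_KEY, // hypothetical env var
});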


Step 2: Setting Up Memory

Before creating the chain, we initialize memory to store past jokes:

const memory = new BufferMemory({
  returnMessages: true,
  memoryKey: "history",
  inputKey: "input",
});
        

How It Works:

- returnMessages: true → stores past jokes as structured message objects.

- memoryKey: "history" → saves past interactions under this key.

- inputKey: "input" → defines where new user inputs are stored.

Why does this matter? It ensures that previous jokes are remembered and fed into the next AI call, preventing repetition!
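To see the round trip in action, here is a quick sketch (illustrative, not from the app) of saving one exchange and reading it back:

// Sketch: store one exchange, then load it back.
await memory.saveContext(
  { input: "Tell me a dad joke about programming" },
  { output: "Why do programmers prefer dark mode? Because light attracts bugs!" }
);

const vars = await memory.loadMemoryVariables({});
console.log(vars.history); // -> an array of human/AI message objects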


Step 3: Creating the Prompt Template

Next, we define a structured prompt template that includes both memory and user inputs:

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a comedian specialized in {style}. Keep jokes clean and family-friendly."],
  new MessagesPlaceholder("history"),
  ["human", "Tell me a {style} about {topic}. Make it unique and different from our previous jokes."]
]);
        

Breaking It Down:

- System Message → defines the AI’s role as a family-friendly comedian.

- MessagesPlaceholder("history") → inserts stored jokes into the conversation.

- User Prompt → requests a new joke, ensuring it’s different from past jokes.

This ensures the AI remembers past jokes and generates a fresh, unique one!


Step 4: Creating the Chain

Now we combine memory, prompts, and AI processing into a single structured chain:

const chain = RunnableSequence.from([
  {
    // Step 1: gather everything the prompt template needs
    input: (input) => input.input,
    history: async () => memory.loadMemoryVariables({}).then(vars => vars.history || []),
    style: (input) => input.style,
    topic: (input) => input.topic,
  },
  prompt, // Step 2: format the structured prompt
  model,  // Step 3: call the ChatOpenAI model
]);
        


How This Works:

1. Retrieve the user input, style, and topic.

2. Fetch stored joke history from memory.

3. Pass all data into the structured prompt.

4. Send the formatted input to OpenAI’s GPT model.

5. Return a response and update memory.

Why Use a Chain?

- No manual data handling – everything flows automatically.

- Ensures the AI uses past memory while generating new jokes.

- Simplifies the AI workflow – all steps are linked together.
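With the chain assembled, generating a joke is a single call. A minimal usage sketch (the values are illustrative):

// Sketch: invoke the chain with the inputs the first step expects.
const response = await chain.invoke({
  input: "Generate a new dad joke about programming",
  style: "dad joke",
  topic: "programming",
});
console.log(response.content); // -> the generated joke text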


Understanding the Chain Structure

A RunnableSequence in LangChain is a series of steps where each step receives data, processes it, and passes it to the next step.

In our case, the chain consists of three major steps:

1. Extract and Retrieve Data → gather user input, memory history, and settings (style & topic).

2. Format the Prompt → construct a structured prompt using the extracted values.

3. Call the AI Model → send the formatted prompt to OpenAI and return a joke.


Breaking Down Each Step

Chain Step 1: Extracting User Input and Retrieving Memory

{
  input: (input) => input.input,
  history: async () => memory.loadMemoryVariables({}).then(vars => vars.history || []),
  style: (input) => input.style,
  topic: (input) => input.topic,
}
        


The input, style, and topic properties extract the user’s request and settings (input.input, input.style, input.topic).

The history property loads previous jokes from memory via memory.loadMemoryVariables().

This line retrieves conversation history from LangChain's memory system and ensures it’s available for the AI model to use. Let’s break it down in simple steps.

1 - Calling memory.loadMemoryVariables({})

memory.loadMemoryVariables({})        

- This function fetches all stored memory variables from BufferMemory.

- Since BufferMemory stores previous interactions, this call retrieves past jokes.

- The empty object ({}) means no additional inputs are needed for the lookup.


2 - Handling the Response with .then(vars => vars.history || [])

.then(vars => vars.history || [])
        

- The retrieved memory (vars) is expected to contain a history key.

- If history exists, return it.

- If history is empty or undefined, return an empty array ([]) to avoid errors.

This ensures that the AI always receives a valid history, even when no previous jokes exist.


3 - Assigning It to the history Key

history: async () => memory.loadMemoryVariables({}).then(vars => vars.history || []),
        

- The extracted history is stored under the key history within the RunnableSequence.

- This makes the conversation history available to the prompt template.

Final Execution Flow

1. The user requests a new joke.

2. Memory is checked for past jokes.

3. Past jokes are retrieved (or an empty array is returned).

4. The AI uses this history to avoid repetition.


Example: What Happens in Practice?

First Request (No Memory Yet)

{
  "history": []
}

Since no previous jokes exist, the AI generates a new joke freely.

Second Request (Memory Exists)

{
  "history": [
    { "role": "assistant", "content": "Why do programmers prefer dark mode? Because light attracts bugs!" }
  ]
}
        

The AI sees the past joke and ensures the new joke is different.

Conclusion

- Retrieves stored AI conversation history.

- Ensures past jokes are considered when generating new ones.

- Prevents errors when memory is empty by returning an empty array.

- Maintains context between multiple AI calls, making interactions more natural!



Diagram: How Data is Extracted

User Request  ───> Extract Input (text)
               ├──> Retrieve Memory (past jokes)
               ├──> Extract Style (e.g., "dad joke")
               └──> Extract Topic (e.g., "programming")


At this point, we have all the necessary details to generate a joke.


Chain Step 2: Formatting the Prompt

prompt        


This uses ChatPromptTemplate to structure the input into a format that the AI can understand.

Example prompt before formatting:

"You are a comedian specialized in {style}. Keep jokes clean and family-friendly.
User: Tell me a {style} about {topic}. Make it unique and different from our previous jokes."
        


After formatting with real values:

"You are a comedian specialized in dad-jokes. Keep jokes clean and family-friendly.
User: Tell me a dad-joke about programming. Make it unique and different from our previous jokes."
        


Diagram: How the Prompt is Built

Extracted Data ───> Inserted into Prompt Template ───> Final AI-ready Prompt
        


This ensures the AI receives structured, contextual input.
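If you want to inspect this formatting yourself, ChatPromptTemplate exposes formatMessages. A small sketch (history left empty for brevity):

// Sketch: render the template with concrete values to see
// exactly what the model will receive.
const messages = await prompt.formatMessages({
  history: [],        // no past jokes yet
  style: "dad-joke",
  topic: "programming",
});
console.log(messages); // -> [SystemMessage, HumanMessage]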


Chain Step 3: Calling the AI Model

model        


The formatted prompt is passed to the OpenAI GPT model via ChatOpenAI, which generates a new joke.

Example Execution:

1. Prompt sent to the AI:

"Tell me a dad-joke about programming. Make it unique and different from our previous jokes."
        


2. The AI responds:

"Why did the JavaScript developer go broke? Because he lost his cache!"
        

Diagram: AI Model Execution

Formatted Prompt ───> AI Model (GPT-4) ───> Returns New Joke
        

The AI uses past jokes from memory to ensure uniqueness before generating a new one.
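For reference, the same call can be made outside the chain by passing formatted messages straight to the model (a sketch that builds on the formatMessages example above):

// Sketch: call the model directly with the formatted messages.
const aiMessage = await model.invoke(messages);
console.log(aiMessage.content); // -> the generated joke text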


Chain Step 4: The Final Execution Flow

Now, let’s put everything together and see how the chain executes in sequence.

Complete Execution Flow

User Request ───> Extract Input, History, Style, Topic ───> Format Prompt ───> AI Model ───> Generate Joke

Each step feeds data into the next step, ensuring smooth processing.

Final Diagram

User Input    ───> Extract Details ───> Format Prompt ───> AI Model ───> Response
                   ├──> Retrieve Memory (past jokes)
                   ├──> Set Style (e.g., "dad joke")
                   └──> Set Topic (e.g., "programming")
Memory Updated  <─── Store New Joke in Memory
        



Conclusion: Why Use RunnableSequence?

- Automates the AI workflow – no need to manually handle data.

- Ensures unique responses – the AI avoids repeating past jokes.

- Keeps code modular – each step is cleanly separated.

- Improves AI comprehension – structured prompts enhance response quality.

RunnableSequence makes AI workflows intuitive and scalable, reducing complexity while improving performance!



Step 5: Storing New Jokes into Memory

Once a joke is generated, we save it back into memory so it won’t be repeated later:

// Note: saveContext lives on the BufferMemory instance we created earlier,
// not on the RunnableSequence itself.
await memory.saveContext(
  { input: input.input },
  { output: newJoke }
);
        


This ensures that every joke is remembered, improving user experience by avoiding repetition.
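If you want to confirm the joke was recorded, you can read memory back (a quick sketch):

// Sketch: verify the saved exchange is now part of history.
const { history } = await memory.loadMemoryVariables({});
console.log(history.length); // grows by two messages (human + AI) per exchange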


Step 6: Using the Chain to Generate Jokes

Finally, we trigger the chain when the user clicks "Tell Me a Joke":

const handleTellJoke = async () => {
  try {
    const input = {
      style: style,
      topic: topic,
      input: `Generate a new ${style} about ${topic}`,
    };

    const response = await chain.invoke(input);
    const newJoke = response.content;

    // Store the exchange on the BufferMemory instance so future jokes can avoid it
    await memory.saveContext(
      { input: input.input },
      { output: newJoke }
    );

    setJoke(newJoke);
    setPreviousJokes(prev => [...prev, newJoke]);
  } catch (error) {
    console.error("Error calling OpenAI:", error);
    setJoke("Failed to get a joke. Check API key.");
  }
};
        


What Happens Here?

1. The user clicks the button to request a joke.

2. The chain retrieves memory and formats the structured prompt.

3. The AI generates a unique joke, avoiding repetition.

4. The new joke is stored in memory for future reference.

5. The UI updates with the new joke!


Final Result: Smarter AI with Chains!

- Automates the AI workflow – no need for manual data handling.

- Ensures jokes are unique – past jokes influence future responses.

- Uses LangChain memory to maintain conversation context.

- Creates a structured, maintainable AI pipeline.

Why Chains Are Important

Without chains, we’d have to manually handle every step separately (prompt formatting, memory retrieval, AI calls, etc.).

With chains, everything is automated, structured, and reusable, making our AI system more efficient and scalable.


Next Steps: Expand the AI's Capabilities!

Try it yourself: https://github.com/ranyelhousieny/LangChain_React/tree/Memory

What else would you like to see? Drop your suggestions in the comments!

What’s the best AI-generated joke you've heard? Let me know below!

#LangChain #AI #ReactJS #OpenAI #MachineLearning #PromptEngineering #Memory
