LangChain Memory in a React AI Joke Generator: A Beginner’s Guide

Wouldn’t it be cool if your AI remembered what it told you before? Imagine asking an AI for a joke, and instead of repeating the same ones, it keeps track of past responses to ensure every joke is fresh and unique.

With LangChain’s memory capabilities, we can do exactly that! In this tutorial, we’ll upgrade our LangChain-powered AI Joke Generator to:

✅ Store previous jokes so the AI doesn’t repeat itself

✅ Use LangChain’s BufferMemory to track conversation history

✅ Display a list of past AI-generated jokes in React

This guide is perfect for absolute beginners—so don’t worry if you’ve never worked with AI memory before. We’ll go step by step to make it easy!



Understanding Memory in LangChain

AI models are stateless by default, meaning they forget past interactions. Memory modules in LangChain allow AI to maintain conversation history across multiple exchanges.

Types of Memory in LangChain

  1. BufferMemory: Stores the full conversation history in a buffer.
  2. BufferWindowMemory: Keeps only the last N interactions.
  3. ConversationSummaryMemory: Summarizes past interactions to conserve space (tokens).
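
To make the options concrete, here is how each memory type might be instantiated in LangChain.js (a minimal sketch; only the summary memory needs a model, since it uses an LLM to write its summaries):

import { ChatOpenAI } from "@langchain/openai";
import {
  BufferMemory,
  BufferWindowMemory,
  ConversationSummaryMemory,
} from "langchain/memory";

const model = new ChatOpenAI({ openAIApiKey: "sk-...", modelName: "gpt-3.5-turbo" });

// 1. Keeps the entire conversation history, message by message
const fullHistory = new BufferMemory({ memoryKey: "history" });

// 2. Keeps only the last k exchanges (here, the last 3)
const windowed = new BufferWindowMemory({ memoryKey: "history", k: 3 });

// 3. Keeps a rolling, LLM-written summary instead of raw messages
const summarized = new ConversationSummaryMemory({ memoryKey: "history", llm: model });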

In our case, we’ll use BufferMemory to store previous jokes.


What is AI Memory & Why Does It Matter?

❌ Without Memory

  • AI forgets past responses and might repeat itself.
  • Every interaction is treated as a fresh conversation.

✅ With Memory (LangChain BufferMemory)

  • AI remembers previous responses and avoids repetition.
  • Maintains context-awareness in multi-turn conversations.
  • Enhances user experience by providing smarter responses.

Real-World Use Cases:

  • Chatbots that remember past conversations
  • AI assistants that track user preferences
  • Personalized AI applications that adapt over time


Understanding the BufferMemory Initialization in LangChain

Import Required Memory Components

import { BufferMemory } from "langchain/memory";        

  • BufferMemory → Stores conversation history.

The following code snippet initializes LangChain's memory system using BufferMemory to keep track of past interactions:

// Initialize memory
const memory = new BufferMemory({
  returnMessages: true,
  memoryKey: "history",
  inputKey: "input",
});
        

Breaking it Down:

  1. BufferMemory: A short-term memory store that holds the conversation history, allowing the AI to recall past interactions.
  2. returnMessages: true: Ensures the stored history is returned as a list of structured messages rather than plain text. Each message has a role (user, assistant, system) and content.
  3. memoryKey: "history": Defines where the conversation history is stored. "history" is the key that holds previous exchanges; when the AI generates a response, it reads this key to retrieve past jokes and interactions.
  4. inputKey: "input": Tells LangChain which field of each new request to record as the user’s message. "input" represents what the user asked, ensuring each new interaction is properly recorded.
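
To make returnMessages concrete, here is roughly what the stored history looks like once one exchange has been saved (a sketch; HumanMessage and AIMessage are LangChain’s message wrappers):

const vars = await memory.loadMemoryVariables({});
console.log(vars.history);
// returnMessages: true  -> [ HumanMessage { content: "Tell me a joke" },
//                            AIMessage { content: "Why did the computer..." } ]
// returnMessages: false -> a single plain-text transcript string instead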

How It Works in Practice:

  • Every time the user requests a new joke, the AI retrieves the history stored in BufferMemory.
  • It uses the stored jokes as context to avoid repeating old ones.
  • Once the AI generates a joke, it saves the new joke back into history using:

await memory.saveContext(
  { input: "Tell me a joke" },
  { output: "Why did the computer break up with the internet? It couldn't handle the bandwidth!" }
);

  • Next time the user asks for a joke, the AI references past stored jokes before generating a new one.

Key Benefits of Using BufferMemory:

✅ Prevents repetition – AI won’t tell the same joke twice.

✅ Maintains context – Past responses influence new ones.

✅ Enhances user experience – Conversations feel more natural and intelligent.

This simple memory setup transforms the AI from a static joke generator into a dynamic, context-aware assistant!


Step 1: Setting Up Our Project

If you haven’t already, clone the project and switch to the updated Memory branch:

git clone -b Memory https://github.com/ranyelhousieny/LangChain_React.git
cd LangChain_React
npm install
npm start
        

If you’re starting from scratch, install the required dependencies:

npm install @mui/material @langchain/openai @langchain/core langchain
        


This installs:

  • @langchain/openai and @langchain/core → LangChain.js packages that manage AI interactions.
  • @mui/material → Material-UI, for clean UI design.
  • langchain → Provides BufferMemory, which stores and tracks previous jokes.


Step 2: Integrating Memory into LangChain

We’ll modify App.js to:

✅ Store past jokes and prevent repetition.

✅ Use LangChain’s memory features for context retention.


1️⃣ Import Required Memory Components

import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { BufferMemory } from "langchain/memory";

        

  • ChatPromptTemplate → Defines structured AI prompts.
  • MessagesPlaceholder → Inserts memory into the conversation.
  • RunnableSequence → Creates a chain that combines memory, prompts, and AI models.
  • BufferMemory → Stores conversation history.


Setting Up Memory When the API Key is Entered

useEffect(() => {
  if (apiKey) {
    const model = new ChatOpenAI({
      openAIApiKey: apiKey,
      modelName: "gpt-3.5-turbo", // Using a reliable model
      temperature: 0.9,
    });

    // Create a prompt template that includes memory context
    const prompt = ChatPromptTemplate.fromMessages([
      ["system", "You are a comedian specialized in {style}. Keep jokes clean and family-friendly."],
      new MessagesPlaceholder("history"),
      ["human", "Tell me a {style} about {topic}. Make it unique and different from our previous jokes."]
    ]);

    // Initialize memory
    const memory = new BufferMemory({
      returnMessages: true,
      memoryKey: "history",
      inputKey: "input",
    });

    // Create a chain that manages memory and responses
    const chain = RunnableSequence.from([
      {
        input: (input) => input.input,
        history: async () => memory.loadMemoryVariables({}).then(vars => vars.history || []),
        style: (input) => input.style,
        topic: (input) => input.topic,
      },
      prompt,
      model,
    ]);

    setChain({ chain, memory });
  }
}, [apiKey]);
        

How This Works:

1️⃣ The AI initializes when the API key is entered.

2️⃣ BufferMemory tracks previous jokes and stores them under "history".

3️⃣ RunnableSequence links memory with AI for smarter responses.
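
With the chain wired up, a one-off call (inside the same useEffect, right after building the chain) would look like this (a sketch assuming a valid API key; the style and topic values are just examples):

const response = await chain.invoke({
  input: "Generate a new pun about coffee",
  style: "pun",
  topic: "coffee",
});
console.log(response.content); // the reply is an AIMessage; .content holds the joke text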


Step 3: Using Memory to Generate Jokes

We update handleTellJoke() to:

✅ Retrieve previous jokes from memory.

✅ Ensure AI-generated jokes are unique.

✅ Store new jokes in memory after generating them.

const handleTellJoke = async () => {
  if (!apiKey || !chain) {
    alert("Please enter your OpenAI API key.");
    return;
  }

  try {
    const input = {
      style: style,
      topic: topic,
      input: `Generate a new ${style} about ${topic}`,
    };

    const response = await chain.chain.invoke(input);
    const newJoke = response.content;

    // Store in memory
    await chain.memory.saveContext(
      { input: input.input },
      { output: newJoke }
    );

    setJoke(newJoke);
    setPreviousJokes(prev => [...prev, newJoke]);
  } catch (error) {
    console.error("Error calling OpenAI:", error);
    setJoke("Failed to get a joke. Check API key.");
  }
};
        


How This Works:

✅ The AI retrieves past jokes from memory before generating a new one.

✅ The AI saves the new joke in memory so it remembers it next time.

✅ The UI updates with the new joke while storing old ones.


Let’s go through a run to understand how it works. In a run with three previous jokes and one new request, all of the previous jokes are included in the conversation history sent to the LLM, along with an instruction to avoid repeating them. All of this is driven by the prompt template below.

1 - Creating Memory-Aware Prompts

We created a structured prompt template that includes memory.

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a comedian specialized in {style}. Keep jokes clean and family-friendly."],
  new MessagesPlaceholder("history"),
  ["human", "Tell me a {style} about {topic}. Make it unique and different from our previous jokes."]
]);
        


Breaking it Down:

  1. System Role: Defines the AI's persona as a joke-telling assistant.
  2. MessagesPlaceholder("history"): Inserts past interactions from memory.
  3. User Prompt: Asks for a new joke while instructing AI not to repeat past jokes.

This ensures the AI uses previously generated jokes as context when creating new ones.
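
You can inspect exactly what the model receives by formatting the template yourself (a sketch; the history messages here are illustrative):

import { HumanMessage, AIMessage } from "@langchain/core/messages";

const messages = await prompt.formatMessages({
  style: "pun",
  topic: "coffee",
  history: [
    new HumanMessage("Generate a new pun about coffee"),
    new AIMessage("Why did the coffee file a police report? It got mugged!"),
  ],
});
// -> [ SystemMessage, HumanMessage (past), AIMessage (past), HumanMessage (new request) ]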


2 - Storing and Retrieving Memory

When a user requests a joke, we do the following:

  1. Load previous jokes from memory
  2. Pass them into the LLM as part of the prompt
  3. Ask for a new, unique joke
  4. Save the new joke back into memory

const chain = RunnableSequence.from([
  {
    input: (input) => input.input,
    history: async () => memory.loadMemoryVariables({}).then(vars => vars.history || []),
    style: (input) => input.style,
    topic: (input) => input.topic,
  },
  prompt,
  model,
]);
        


What’s Happening Here?

  • Retrieve memory: memory.loadMemoryVariables({}) fetches stored jokes.
  • Pass it to the prompt: MessagesPlaceholder("history") inserts this into the conversation.
  • Ensure uniqueness: The AI sees the previous jokes and avoids repeating them.


3 - Storing New Jokes into Memory

Once a new joke is generated, we save it into the memory buffer:

await chain.memory.saveContext(
  { input: input.input },
  { output: newJoke }
);
        


This allows our AI to recall old jokes in the next interaction, ensuring they don’t get repeated.
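
After a couple of turns, the buffer simply accumulates messages, so each new request sees everything that came before it (a sketch; the joke texts are illustrative):

await chain.memory.saveContext(
  { input: "Generate a new joke about programming" },
  { output: "Why do programmers prefer dark mode? Because light attracts bugs!" }
);
await chain.memory.saveContext(
  { input: "Generate a new joke about programming" },
  { output: "Why did the developer go broke? He used up all his cache!" }
);

const { history } = await chain.memory.loadMemoryVariables({});
console.log(history.length); // 4: two user requests and two AI jokes, oldest first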


Running the Program

Example Run

When we ask for a joke multiple times:

  • The AI remembers past jokes.
  • The conversation history is sent to OpenAI.
  • The model creates a new joke based on the history.

In this case:

  1. The first request asked for a joke about programming.
  2. The AI provided “Why do programmers prefer dark mode? Because light attracts bugs!”
  3. The second request included the previous joke in history.
  4. The AI ensured the new joke was unique.


Step 4: Displaying Past Jokes in the UI

To show previous AI-generated jokes, we display them in a scrollable list:

{previousJokes.length > 0 && (
  <Box marginTop="20px">
    <Typography variant="h6">Previous Jokes:</Typography>
    <Paper elevation={2} style={{ maxHeight: '200px', overflow: 'auto' }}>
      <List>
        {previousJokes.slice(0, -1).reverse().map((prevJoke, index) => (
          <ListItem key={index} divider>
            <ListItemText 
              primary={prevJoke}
              secondary={`Joke #${previousJokes.length - 1 - index}`}
            />
          </ListItem>
        ))}
      </List>
    </Paper>
  </Box>
)}
        

Why This Is Useful:

✅ Users see past AI-generated jokes in a list.

✅ Prevents the AI from repeating old jokes.

✅ Displays jokes in newest-first order for easy reference.
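
As an optional enhancement, a “Clear history” button would need to reset both stores: the LangChain buffer and the React state. A minimal sketch (the handler name is my own):

const handleClearHistory = async () => {
  if (chain) {
    await chain.memory.clear(); // empties the BufferMemory chat history
  }
  setPreviousJokes([]); // empties the UI list
  setJoke("");
};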



Final Result: AI with Memory!

✅ Uses GPT-3.5-Turbo for fast, efficient responses.

✅ Remembers past jokes and avoids repetition.

✅ Stores AI responses in memory using LangChain’s BufferMemory.

✅ Displays joke history for better user interaction.

Congratulations! You’ve built an AI that remembers!


What’s Next?

Enhancing AI memory even further! Next, we’ll explore long-term memory, allowing AI to remember past interactions across sessions.

Try it now:

git clone -b Memory https://github.com/ranyelhousieny/LangChain_React.git
cd LangChain_React
npm install
npm start
        

What’s the funniest AI-generated joke you’ve heard? Drop it in the comments!

#AI #LangChain #OpenAI #Memory #MachineLearning #ReactJS #PromptEngineering

