LangChain Memory in a React AI Joke Generator: A Beginner’s Guide
Rany ElHousieny, PhD
Wouldn’t it be cool if your AI remembered what it told you before? Imagine asking an AI for a joke, and instead of repeating the same ones, it keeps track of past responses to ensure every joke is fresh and unique.
With LangChain’s memory capabilities, we can do exactly that! In this tutorial, we’ll upgrade our LangChain-powered AI Joke Generator to:
- Store previous jokes so the AI doesn’t repeat itself
- Use LangChain’s BufferMemory to track conversation history
- Display a list of past AI-generated jokes in React
This guide is perfect for absolute beginners—so don’t worry if you’ve never worked with AI memory before. We’ll go step by step to make it easy!
Understanding Memory in LangChain
AI models are stateless by default, meaning they forget past interactions. Memory modules in LangChain allow AI to maintain conversation history across multiple exchanges.
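To see what “stateless” means in practice, here is a minimal sketch (the API key is a placeholder): two separate calls to the same model share nothing, so the second can easily repeat the first.

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ openAIApiKey: "sk-...", modelName: "gpt-3.5-turbo" });

// Each invoke() is an independent request; the model never sees its first
// answer while producing the second, so duplicate jokes are likely.
const first = await model.invoke("Tell me a joke about cats");
const second = await model.invoke("Tell me a joke about cats");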
Types of Memory in LangChain
LangChain ships several memory classes, including BufferMemory (stores the full conversation), BufferWindowMemory (keeps only the last k exchanges), and ConversationSummaryMemory (summarizes older turns to save tokens). In our case, we used BufferMemory to store previous jokes, as sketched below.
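For reference, here is a hedged sketch showing two of these classes side by side; both come from the same langchain/memory module:

import { BufferMemory, BufferWindowMemory } from "langchain/memory";

// Keeps the entire conversation; this is what the tutorial uses.
const fullHistory = new BufferMemory({ returnMessages: true, memoryKey: "history" });

// Keeps only the last k exchanges; a useful swap if the joke history grows long.
const lastFew = new BufferWindowMemory({ k: 5, returnMessages: true, memoryKey: "history" });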
What is AI Memory & Why Does It Matter?
Without memory, every request starts from a blank slate: the model has no idea which jokes it has already told, so repeats are common.
With memory (LangChain’s BufferMemory), past exchanges are fed back into each new prompt, so the model can explicitly avoid repeating itself.
Real-World Use Cases: chatbots that remember user preferences, customer-support assistants that keep context across a conversation, and coding assistants that recall earlier instructions.
Understanding the BufferMemory Initialization in LangChain
Import Required Memory Components
import { BufferMemory } from "langchain/memory";
The following code snippet initializes LangChain's memory system using BufferMemory to keep track of past interactions:
// Initialize memory
const memory = new BufferMemory({
  returnMessages: true,
  memoryKey: "history",
  inputKey: "input",
});
Breaking it Down:
- returnMessages: true returns the history as a list of message objects (HumanMessage/AIMessage) instead of one concatenated string, which is what chat models expect.
- memoryKey: "history" is the variable name the memory exposes; it must match the MessagesPlaceholder("history") in our prompt.
- inputKey: "input" tells the memory which field of our input object to record as the human turn.
How It Works in Practice:
await memory.saveContext(
  { input: "Tell me a joke" },
  { output: "Why did the computer break up with the internet? It couldn't handle the bandwidth!" }
);
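The counterpart call reads the history back out; here is a quick sketch of what the memory now contains:

const vars = await memory.loadMemoryVariables({});
console.log(vars.history);
// Because returnMessages is true, history is an array of message objects:
// the HumanMessage "Tell me a joke" followed by the AIMessage with the joke.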
Key Benefits of Using BufferMemory:
- Prevents repetition – AI won’t tell the same joke twice.
- Maintains context – past responses influence new ones.
- Enhances user experience – conversations feel more natural and intelligent.
This simple memory setup transforms the AI from a static joke generator to a dynamic, context-aware assistant!
Step 1: Setting Up Our Project
If you haven’t already, clone the project and switch to the updated Memory branch:
git clone -b Memory https://github.com/ranyelhousieny/LangChain_React.git
cd LangChain_React
npm install
npm start
If you’re starting from scratch, install the required dependencies:
npm install @mui/material @emotion/react @emotion/styled langchain @langchain/openai @langchain/core
This installs:
- @mui/material (with its @emotion peer dependencies) for the UI components
- langchain, the main package that provides BufferMemory under langchain/memory
- @langchain/openai for the ChatOpenAI model binding
- @langchain/core for prompts, messages, and runnables such as RunnableSequence
Step 2: Integrating Memory into LangChain
We’ll modify App.js to:
- Store past jokes and prevent repetition.
- Use LangChain’s memory features for context retention.
1. Import Required Memory Components
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { BufferMemory } from "langchain/memory";
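If ChatOpenAI is not already imported from the earlier tutorial, add it as well:

import { ChatOpenAI } from "@langchain/openai";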
Setting Up Memory When the API Key is Entered
useEffect(() => {
  if (apiKey) {
    const model = new ChatOpenAI({
      openAIApiKey: apiKey,
      modelName: "gpt-3.5-turbo", // Using a reliable model
      temperature: 0.9,
    });

    // Create a prompt template that includes memory context
    const prompt = ChatPromptTemplate.fromMessages([
      ["system", "You are a comedian specialized in {style}. Keep jokes clean and family-friendly."],
      new MessagesPlaceholder("history"),
      ["human", "Tell me a {style} about {topic}. Make it unique and different from our previous jokes."],
    ]);

    // Initialize memory
    const memory = new BufferMemory({
      returnMessages: true,
      memoryKey: "history",
      inputKey: "input",
    });

    // Create a chain that manages memory and responses
    const chain = RunnableSequence.from([
      {
        input: (input) => input.input,
        history: async () => memory.loadMemoryVariables({}).then((vars) => vars.history || []),
        style: (input) => input.style,
        topic: (input) => input.topic,
      },
      prompt,
      model,
    ]);

    setChain({ chain, memory });
  }
}, [apiKey]);
How This Works:
1. The AI initializes when the API key is entered.
2. BufferMemory tracks previous jokes and stores them under "history".
3. RunnableSequence links memory with the AI for smarter responses.
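To make the flow concrete, here is a minimal round trip with the same wiring outside React (it assumes the chain and memory instances created above; the pun/cats values are just example inputs):

// First request: history is empty, so the model starts fresh.
const firstInput = { input: "Generate a new pun about cats", style: "pun", topic: "cats" };
const firstResponse = await chain.invoke(firstInput);
await memory.saveContext({ input: firstInput.input }, { output: firstResponse.content });

// Second request: loadMemoryVariables() now returns the first exchange,
// so the prompt shows the model exactly which joke to avoid repeating.
const secondResponse = await chain.invoke(firstInput);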
Step 3: Using Memory to Generate Jokes
We update handleTellJoke() to:
- Retrieve previous jokes from memory.
- Ensure AI-generated jokes are unique.
- Store new jokes in memory after generating them.
const handleTellJoke = async () => {
  if (!apiKey || !chain) {
    alert("Please enter your OpenAI API key.");
    return;
  }

  try {
    const input = {
      style: style,
      topic: topic,
      input: `Generate a new ${style} about ${topic}`,
    };

    const response = await chain.chain.invoke(input);
    const newJoke = response.content;

    // Store in memory
    await chain.memory.saveContext(
      { input: input.input },
      { output: newJoke }
    );

    setJoke(newJoke);
    setPreviousJokes((prev) => [...prev, newJoke]);
  } catch (error) {
    console.error("Error calling OpenAI:", error);
    setJoke("Failed to get a joke. Check API key.");
  }
};
How This Works:
- The AI retrieves past jokes from memory before generating a new one.
- The AI saves the new joke in memory so it remembers it next time.
- The UI updates with the new joke while storing old ones.
Let’s walk through a run to understand how it works. In a sample run with three previous jokes, all of them are loaded from memory, injected into the prompt, and the request explicitly asks the LLM to avoid repeating them. The system message, the previous exchanges, and the new request are all assembled by the prompt template we break down next.
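To illustrate, the final message array handed to the model on such a run looks roughly like this (the joke text here is invented for the example):

import { SystemMessage, HumanMessage, AIMessage } from "@langchain/core/messages";

const messagesSentToModel = [
  new SystemMessage("You are a comedian specialized in pun. Keep jokes clean and family-friendly."),
  // ...the previous exchanges loaded from memory, oldest first...
  new HumanMessage("Generate a new pun about cats"),
  new AIMessage("Why did the cat sit on the computer? To keep an eye on the mouse!"),
  new HumanMessage("Tell me a pun about cats. Make it unique and different from our previous jokes."),
];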
1 - Creating Memory-Aware Prompts
We created a structured prompt template that includes memory:
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a comedian specialized in {style}. Keep jokes clean and family-friendly."],
  new MessagesPlaceholder("history"),
  ["human", "Tell me a {style} about {topic}. Make it unique and different from our previous jokes."],
]);
Breaking it Down:
- The system message sets the comedian persona and keeps jokes family-friendly; {style} is filled in at call time.
- MessagesPlaceholder("history") is where the memory’s stored messages are inserted, giving the model the full joke history.
- The human message makes the actual request and explicitly instructs the model to differ from previous jokes.
This ensures the AI uses previously generated jokes as context when creating new ones.
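You can verify the assembled prompt yourself with formatMessages; a small sketch using the prompt and memory from above (the pun/cats values are examples):

const history = (await memory.loadMemoryVariables({})).history || [];
const formatted = await prompt.formatMessages({ style: "pun", topic: "cats", history });
console.log(formatted); // system message, then past exchanges, then the new request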
2 - Storing and Retrieving Memory
When a user requests a joke, we do the following:
const chain = RunnableSequence.from([
  {
    input: (input) => input.input,
    history: async () => memory.loadMemoryVariables({}).then((vars) => vars.history || []),
    style: (input) => input.style,
    topic: (input) => input.topic,
  },
  prompt,
  model,
]);
What’s Happening Here?
- The first step is a map that builds the prompt variables: input, style, and topic pass straight through from the caller, while history is loaded fresh from memory on every invocation.
- The prompt then formats those variables, including the history messages, into a full chat prompt.
- Finally, the model generates a response that has seen every previous joke.
3 - Storing New Jokes into Memory
Once a new joke is generated, we save it into the memory buffer:
await chain.memory.saveContext(
  { input: input.input },
  { output: newJoke }
);
This allows our AI to recall old jokes in the next interaction, ensuring they don’t get repeated.
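As a quick sanity check, you can read the raw stored messages back through the chat history that BufferMemory maintains internally:

const stored = await chain.memory.chatHistory.getMessages();
console.log(stored.length); // grows by two per joke: one human turn, one AI turn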
Running the Program
Example Run
When we ask for a joke multiple times, each request carries the full history of earlier jokes, so every new response comes back different from the ones before it. The memory grows by one human/AI exchange per request, and the prompt keeps steering the model away from repeats.
Step 4: Displaying Past Jokes in the UI
To show previous AI-generated jokes, we display them in a scrollable list:
{previousJokes.length > 0 && (
  <Box marginTop="20px">
    <Typography variant="h6">Previous Jokes:</Typography>
    <Paper elevation={2} style={{ maxHeight: '200px', overflow: 'auto' }}>
      <List>
        {/* slice(0, -1) skips the current joke (shown above); reverse() puts newest first */}
        {previousJokes.slice(0, -1).reverse().map((prevJoke, index) => (
          <ListItem key={index} divider>
            <ListItemText
              primary={prevJoke}
              secondary={`Joke #${previousJokes.length - 1 - index}`}
            />
          </ListItem>
        ))}
      </List>
    </Paper>
  </Box>
)}
Why This Is Useful:
- Users see past AI-generated jokes in a list.
- The visible list mirrors the memory that keeps the AI from repeating old jokes.
- Jokes display in newest-first order for easy reference, with the current joke excluded since it is already shown above.
Final Result: AI with Memory!
- Uses GPT-3.5-Turbo for fast, efficient responses.
- Remembers past jokes and avoids repetition.
- Stores AI responses in memory using LangChain’s BufferMemory.
- Displays joke history for better user interaction.
Congratulations! You’ve built an AI that remembers!
What’s Next?
Enhancing AI memory even further! Next, we’ll explore long-term memory, allowing the AI to remember past interactions across sessions.
Try it now:
git clone -b Memory https://github.com/ranyelhousieny/LangChain_React.git
cd LangChain_React
npm install
npm start
What’s the funniest AI-generated joke you’ve heard? Drop it in the comments!
#AI #LangChain #OpenAI #Memory #MachineLearning #ReactJS #PromptEngineering