LangChain Templates - Turbocharging AI Development
A while ago, LangChain released "LangChain Templates", which I think are going to reshape how AI app development is done.
...But what really are LangChain Templates?
These templates provide a collection of pre-configured architectures that streamline the process of deploying AI applications. They are designed to be easily deployable, ensuring that developers can quickly move from concept to production.
Key Advantages that I see:
...but what's causing so much confusion around their usage in the LangChain fraternity?
The tricky bit here is the number of different ways they can be customized and leveraged.
I explored this area in depth and, taking one example template, I will walk through the complete end-to-end process of leveraging it in 4 different ways to bring up your AI application.
I've also included a link to my GitHub repo at the end of this article.
...Let's dive in!
Setting up the Folder structure
Create a folder, let's say "LangChain-Templates".
Create a Conda environment inside it:
conda create -p venv python=3.10
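Since the environment is created with -p (as a path), activate it from the same folder; a quick sketch:
conda activate ./venv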
Install LangChain CLI
To use template packages, you should first have the LangChain CLI installed. This is the official command-line interface that we will use to interact with LangChain and LangServe.
pip install -U langchain-cli
I almost always install the python-dotenv package as well; it will be used later to import environment variables into the Python code.
pip install python-dotenv
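The pirate-speak template calls an OpenAI chat model under the hood, so you'll also want your API key in a .env file at the root of the project for python-dotenv to pick up. Something along these lines (the variable name below is the standard one the OpenAI integration expects):
OPENAI_API_KEY=<your-openai-api-key>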
Install the required LangChain package
For the scope of this tutorial, I am going to use the "pirate-speak" template to create a chatbot that speaks in bad-ass Pirate English.
langchain app new my-app --package pirate-speak
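If everything goes well, the command scaffolds a new project folder. From what I've seen, the layout looks roughly like this (double-check on your machine; names may vary slightly by CLI version):
my-app/
├── app/
│   └── server.py          # the FastAPI / LangServe entry point we'll edit next
├── packages/
│   └── pirate-speak/      # the downloaded template code (pirate_speak/chain.py etc.)
├── pyproject.toml
└── Dockerfile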
Configure the package
At this point you'll need to create routes for your application to access the "Chains" in the template:
...so let's add them to the server.py file, which is located inside your package:
..\pirate-speak\my-app\app\server.py
And add the following code to your server.py file:
from pirate_speak.chain import chain as pirate_speak_chain
add_routes(app, pirate_speak_chain, path="/pirate-speak")
Import the env variables into server.py:
import os
from dotenv import load_dotenv
load_dotenv()
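For reference, after both edits my server.py looks roughly like the sketch below. The FastAPI boilerplate (root redirect, uvicorn block) is what the CLI generated for me, so treat it as indicative rather than exact; the one deliberate choice is calling load_dotenv() before importing the chain, since the template's model may read OPENAI_API_KEY at import time:

from dotenv import load_dotenv

# Load OPENAI_API_KEY (and anything else in .env) before the chain module
# is imported, because the model can read the key when the module loads.
load_dotenv()

from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langserve import add_routes
from pirate_speak.chain import chain as pirate_speak_chain

app = FastAPI()

@app.get("/")
async def redirect_root_to_docs():
    # the generated app redirects the root URL to the interactive docs
    return RedirectResponse("/docs")

# expose the template's chain under /pirate-speak
add_routes(app, pirate_speak_chain, path="/pirate-speak")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)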
...At this point, we spin up LangServe from the app directory:
LangChain-Templates/pirate-speak/my-app
langchain serve
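By default this serves on http://127.0.0.1:8000. If that port is already in use, the CLI accepts a --port flag, as far as I can tell (run langchain serve --help to confirm on your version):
langchain serve --port 8001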
If you have been following along so far, you should have your FastAPI app running locally:
You can confirm this by opening the interactive API docs at http://127.0.0.1:8000/docs, where the template's routes are listed.
...Let the fun begin!
There are primarily 4 ways of using this template that I am going to cover now.
Method-1: Using the FastAPI
5. Press "Execute".
6. You'll see a response like:
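If you prefer the command line over the Swagger UI, you can hit the same invoke endpoint with curl; this is just a sketch using the same request body shape the docs page shows (quoting written for a Unix-like shell):
curl -X POST http://127.0.0.1:8000/pirate-speak/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"chat_history": [], "text": "Hey there, how are you?"}, "config": {}, "kwargs": {}}'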
Method-2: Using the Playground
You can access the playground at http://127.0.0.1:8000/pirate-speak/playground
Method-3: Accessing the template from code with a fixed input embedded as JSON:
To access the template from Python code and build applications on top of it, we need to create a Python file containing the implementation of our app.
Here I use a RemoteRunnable object from the langserve library, configured to talk to the LangServe server. It sends input data containing the chat history and a small string for processing, invokes the runnable, and prints the result that comes back.
from langserve.client import RemoteRunnable

# point the client at the route we exposed in server.py
runnable = RemoteRunnable("http://localhost:8000/pirate-speak")

input_data = {
    "chat_history": [],
    "text": "Hey there How Are you? How are things up with you?"
}

print(runnable.invoke(input_data))
You should see an output like this on invoking the python application:
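As a side note: RemoteRunnable implements the standard Runnable interface, so you should also be able to stream the reply token by token instead of waiting for the full response, assuming the chain behind the endpoint supports streaming. A rough sketch:
# stream chunks as they arrive; .content assumes chat-message chunks
for chunk in runnable.stream(input_data):
    print(chunk.content, end="", flush=True)
print()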
Method-3 (variant): Accessing the template from code, processing a text input from the user:
Here I have defined a Python function, get_pirate_response, that sends a user-entered text string to the application's server endpoint to translate it into pirate English. It then retrieves and returns the translated response.
import requests

def get_pirate_response(user_input):
    # call the LangServe /invoke endpoint exposed for the pirate-speak chain
    response = requests.post(
        "http://localhost:8000/pirate-speak/invoke",
        json={
            "input": {
                "chat_history": [],
                "text": user_input
            },
            "config": {},
            "kwargs": {}
        },
    )
    # the translated text lives under output.content in the JSON response
    return response.json()['output']['content']

user_input = input("Enter a string: ")
print(get_pirate_response(user_input))
I have kept the best part for last!
Method-4: Accessing the template from a UI-based application that uses Streamlit for the front end.
Install streamlit in your conda environment:
pip install streamlit
Here I have created a simple Streamlit application where a user can input a string.
Upon submitting the input, it invokes the get_pirate_response function to send the input to the server endpoint for translation into pirate language using an HTTP POST request.
The response is then displayed in the Streamlit app.
import requests
import streamlit as st

def get_pirate_response(user_input):
    # same /invoke call as before, just reused behind the Streamlit UI
    response = requests.post(
        "http://localhost:8000/pirate-speak/invoke",
        json={
            "input": {
                "chat_history": [],
                "text": user_input
            },
            "config": {},
            "kwargs": {}
        },
    )
    return response.json()['output']['content']

st.title("Welcome to the Pirate LangChain App")
user_input = st.text_input("Enter a string: ")

if user_input:
    response = get_pirate_response(user_input)
    st.write(response)
Run the streamlit app:
streamlit run app_user-input-streamlit.py
The Streamlit application will spin up at this point:
Try out a user input:
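If you also want the app to display the running conversation, one simple client-side option (my own sketch, not something that ships with the template) is to keep past exchanges in st.session_state. You could replace the final if-block of the app above with something like:
if "history" not in st.session_state:
    # keep (question, answer) pairs across Streamlit reruns
    st.session_state.history = []

if user_input and (not st.session_state.history or st.session_state.history[-1][0] != user_input):
    # only call the endpoint for new input, so reruns don't duplicate entries
    answer = get_pirate_response(user_input)
    st.session_state.history.append((user_input, answer))

for question, answer in st.session_state.history:
    st.write(f"You: {question}")
    st.write(f"Pirate: {answer}")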
So we see how efficient these LangChain Templates can be.
In this rapidly evolving AI development landscape, LangChain Templates stand out as a major innovation, offering the AI development fraternity a streamlined path to creating production-quality AI apps.
As we’ve explored throughout this article, their integration with FastAPI paves the way for efficiency and scalability.
As we conclude this discussion, I hope the insights I shared here spark a new perspective on the potential of LangChain Templates in your AI app development journey.
I’m Rohit, an R&D Manager - insanely passionate about AI/ML Innovations.
Please feel free to connect if you’re intrigued by the vastness and possibilities in LangChain Templates or have insights to share.
Let's learn together and push the boundaries of what’s possible in AI development.