Turn your Langflow Prototype into a Streamlit Chatbot Application
Image by Midjourney



Learn how to turn your Langflow flow into a fully-functional Streamlit-based conversational chatbot application


Introduction

According to its creator, LogSpace, a software company that provides customized Machine Learning services, Langflow is a web-based UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows. LangChain, created by Harrison Chase, is a wildly popular framework for developing applications powered by large language models (LLMs).

Langflow allows users to quickly develop simple to complex LangChain prototypes without any required coding, truly democratizing access to LLMs. According to the README, “Creating flows with Langflow is easy. Simply drag sidebar components onto the canvas and connect them to create your pipeline. Langflow provides a range of LangChain components, including LLMs, prompt serializers, agents, and chains.”

Typical Langflow flow using a range of LangChain components

Flowise

Flowise, created by FlowiseAI, is an alternative to Langflow; the two offer nearly identical features. According to its website, Flowise is an open-source visual UI tool for building customized LLM flows using LangChainJS, the Node.js TypeScript/JavaScript version of LangChain. As of July 2023, both projects have nearly identical star counts on GitHub, ~11k each. However, Flowise does have over 2.5x the number of forks, 3.9k to Langflow’s 1.4k. You can find many articles and YouTube videos comparing the two tools.

Turning Flows into Applications

Although it excels at no-code experimentation and prototyping, Langflow’s documentation lacks details on turning a LangChain prototype (aka a flow) into a standalone application, such as a chatbot. The documentation simply states, “Once you’re done, you can export your flow as a JSON file to use with LangChain. To do so, click the Export button in the top right corner of the canvas; then, in Python, you can load the flow with”:

from langflow import load_flow_from_json

flow = load_flow_from_json("path/to/flow.json")
# Now you can use it like any chain
flow("Hey, have you heard of Langflow?")        

By running this code, you will get a response from the flow in the form of a JSON payload, similar to the following:

{
    "result": {
        "text": "Yes, I am familiar with Langflow. It is an effortless way to experiment and prototype LangChain pipelines."
    }
}        
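Since the flow returns plain JSON, only a little parsing is needed on the client side. Below is a minimal sketch of safely extracting the `text` field from the payload shape shown above; the fallback message is my own illustrative addition, not part of Langflow's API.

```python
# Illustrative: extract the generated text from a Langflow response payload.
# The payload structure matches the example above; the fallback string is a
# hypothetical default, not something Langflow returns.
payload = {
    "result": {
        "text": "Yes, I am familiar with Langflow. It is an effortless way "
                "to experiment and prototype LangChain pipelines."
    }
}

# Chained .get() calls avoid a KeyError if the flow returns an error payload.
text = payload.get("result", {}).get("text", "No answer returned.")
print(text)
```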

So, how do we go from a Langflow flow to a full-fledged chatbot? In this post, we will see how to turn a Langflow flow into a Streamlit application. Streamlit, like LangChain, is an extremely popular framework for building Python applications that connect to LLMs. Their website states, “Streamlit is a faster way to build and share data apps. Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No front?end experience required.”

Cocktails Anyone?

For demonstration purposes, I’ve created a personal conversational chatbot to answer cocktail-related questions. The chatbot’s knowledge of cocktails will come from the digital version of the 2019 mixology guide, The NoMad Cocktail Book, by Leo Robitschek, available in hardcover on Amazon. It makes a great gift! If this book is not to your taste, many similar guides are available online, or perhaps choose another topic, like cooking, golf, or gardening. The main advantage of this PDF is that it contains a single column of text, making it easily splittable (see the embeddings topic later in the post). Based on my research, some PDFs have multiple columns per page, which are not easily splittable and will negatively impact the accuracy of the chatbot’s responses (e.g., Top 50 Cocktails, by Nick Wilkins, another great cocktail guide).

Preview of this post’s conversational chatbot using Streamlit

Langflow uses LangChain components. Our chatbot starts with the ConversationalRetrievalQA chain, ConversationalRetrievalChain, which builds on RetrievalQAChain to provide a chat history component. It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question, then looks up relevant documents from the retriever and finally passes those documents and the question (aka prompt) to a question answering chain to return a response. This process is referred to as Retrieval Augmented Generation (RAG).
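Conceptually, the chain's retrieve-then-answer loop can be sketched in a few lines of plain Python. Everything here is a stand-in: `retrieve` fakes the vector-store lookup with simple keyword overlap, and `answer` fakes the LLM call; the real chain delegates these steps to Chroma and OpenAI.

```python
# Illustrative sketch of the Retrieval Augmented Generation (RAG) loop behind
# ConversationalRetrievalChain. The retriever and "LLM" below are hypothetical
# stand-ins, not LangChain code.

DOCS = [
    "SUMMER OF YES -- CREATOR: Leo Robitschek. A shandy with elderflower.",
    "ENGLISH HEAT -- CREATOR: Leo Robitschek. A spicy gin sour.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Fake semantic search: rank documents by words shared with the question."""
    words = set(question.lower().split())
    return sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))[:k]

def answer(question: str, context: list[str], history: list[str]) -> str:
    """Fake LLM call: a real chain sends the prompt, history, and context to the model."""
    return f"Based on {len(context)} document(s): {context[0]}"

history: list[str] = []
question = "Who created the Summer of Yes cocktail?"
docs = retrieve(question)                   # 1. look up relevant documents
response = answer(question, docs, history)  # 2. answer using the documents as context
history.extend([question, response])        # 3. remember the exchange for follow-ups

print(response)
```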

The chatbot uses LangChain’s PDF document loader, PyPDFLoader, recursive text splitter, RecursiveCharacterTextSplitter, OpenAI’s text embeddings, OpenAIEmbeddings, and the Chroma vector store through LangChain’s vectorstores module. The PDF file is split into individual documents by the RecursiveCharacterTextSplitter. The documents are then converted to vector embeddings by OpenAIEmbeddings, which creates vector representations (lists of numbers) of pieces of text. The embeddings are stored in Chroma, an AI-native open-source embedding database. Using a semantic similarity search, relevant documents are retrieved based on the end-user’s question and passed, along with the question, to a question answering chain to return a response.
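To make the splitting step concrete, here is a toy fixed-size splitter with overlap, written in plain Python. It only illustrates the idea of overlapping chunks; LangChain's actual RecursiveCharacterTextSplitter is more sophisticated, splitting on a hierarchy of separators (paragraphs, sentences, words) before falling back to raw characters.

```python
def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Toy fixed-size splitter with overlap; illustrative only, not
    LangChain's actual RecursiveCharacterTextSplitter algorithm."""
    chunks = []
    step = chunk_size - overlap  # advance less than chunk_size so chunks overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Simulated page text; overlap keeps context that straddles a chunk boundary.
page = "SUMMER OF YES. GLASS: Highball. CREATOR: Leo Robitschek. " * 4
chunks = split_text(page, chunk_size=80, overlap=10)
print(len(chunks), "chunks; first chunk:", chunks[0][:40])
```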

Langflow Flow

The conversational chatbot has two flows. The first flow handles text splitting, embedding, and uploading to the vector store. This is a one-time process; there is no need to incur charges to create embeddings every time we run the chatbot.

Flow 1 of 2: Text splitting, embedding, and uploading to vector store

The second flow retrieves the relevant documents from the vector database, manages the buffer memory, and accepts the prompt from the end-user for the conversational chat interface.

Flow 2 of 2: Document retrieval, buffer memory, and chat interface

Langflow provides a basic chat user interface intended for experimentation and prototyping. We can use this interface to test our conversational chatbot flow.

Langflow’s chat interface

Testing the Retrieval Process

LLMs have broad general knowledge. Therefore, the chatbot could return reasonable answers to some of our questions without direct knowledge of the PDF’s content. To test whether the document retrieval process is working, we can compare the output from a basic chatbot flow without retrieval capabilities to our chatbot with retrieval capabilities. We can ask questions requiring specific knowledge found in the PDF, such as “Who was the creator of the Summer of Yes cocktail?”, the answer to which conveniently happens to be the book’s author, Leo Robitschek.

Below is an example of a conversational chatbot flow without retrieval capabilities. The chatbot is unable to answer a series of PDF-content-specific questions correctly; it either fails to return an answer or returns an incorrect one.

Chatbot flow without retrieval capabilities

Conversely, below, we see output from our conversational chatbot with retrieval capabilities. The chatbot correctly answered the questions, given its firsthand knowledge of the PDF’s content using the RAG method.

Chatbot flow with retrieval capabilities

Calling the Langflow Chatbot Externally

Langflow provides sample code to call your flow programmatically using cURL or Python. Each flow has a unique FLOW_ID, required to reach the correct Langflow flow.

Langflow sample code to call the chatbot flow

Executing the cURL command or Python code, as is, returns a JSON response payload from the flow running on Langflow. As shown below (page content abridged), the response payload contains the LLM’s answer to the end-user’s question, along with the relevant documents that were semantically similar to the question, pulled from the vector database, and used by the LLM to compose the answer.

{
    "result": {
        "answer": "The creator of the Summer of Yes cocktail is Leo Robitschek.",
        "source_documents": [
            {
                "page_content": "SUMMER OF YES\nGLASS:\n Highball\nICE:\n 1¼-inch cubes\nCREATOR:\n Leo Robitschek\nA shandy with elderflower and jalapeño\nBerliner Weisse, Elderflower Liqueur, Rhubarb Shrub, Agave, Lemon, Jalapeño, Cucumber, Salt...",
                "metadata": {
                    "source": "/root/.cache/langflow/61e4bdc4-9210-4ddf-a1a6-6c93e8067bbb/...",
                    "page": 96
                }
            },
            {
                "page_content": "SUMMER JAMZ\nGLASS:\n Double rocks\nICE:\n Crushed\nCREATOR:\n Nathan O’Neil\nA tart berry cobbler with floral tones\nAquavit, Elderflower Liqueur, Rabarbaro, Lingonberry, Lemon\n1 barspoon lingonberry jam...",
                "metadata": {
                    "source": "/root/.cache/langflow/61e4bdc4-9210-4ddf-a1a6-6c93e8067bbb/...",
                    "page": 165
                }
            },
            {
                "page_content": "ENGLISH HEAT\nGLASS:\n Fizz\nICE:\n 1¼-inch cubes\nCREATOR:\n Leo Robitschek\nA spicy gin sour with hints of vanilla\nLondon Dry Gin, Chambery Dry Vermouth, Tuaca, Jalapeño, Agave, ...",
                "metadata": {
                    "source": "/root/.cache/langflow/61e4bdc4-9210-4ddf-a1a6-6c93e8067bbb/...",
                    "page": 126
                }
            }
        ]
    }
}        
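A client only needs a small amount of parsing to surface both the answer and its provenance. The helper below is my own illustrative sketch against the payload shape shown above; the field names (`result`, `answer`, `source_documents`, `metadata.page`) come straight from the response.

```python
# Illustrative helper (my own, not a Langflow API) that extracts the answer
# and the source page numbers from the retrieval-chain payload shown above.
def parse_flow_response(payload: dict) -> tuple[str, list[int]]:
    result = payload.get("result", {})
    answer = result.get("answer", "")
    pages = [doc["metadata"]["page"] for doc in result.get("source_documents", [])]
    return answer, pages

# Abridged stand-in for the payload shown above.
payload = {
    "result": {
        "answer": "The creator of the Summer of Yes cocktail is Leo Robitschek.",
        "source_documents": [
            {"page_content": "SUMMER OF YES ...", "metadata": {"page": 96}},
            {"page_content": "ENGLISH HEAT ...", "metadata": {"page": 126}},
        ],
    }
}

answer, pages = parse_flow_response(payload)
print(answer, "| sources: pages", pages)
```

Surfacing the page numbers alongside the answer is an easy way to let end-users verify responses against the book itself.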

Using the example Python API code from Langflow and some sample code provided by Streamlit in their tutorial, Build conversational apps, we can create a functional conversational chatbot in less than 75 lines of Python.

Examples of the Streamlit-based conversational chatbot application

Below, I have combined the sample code from Langflow with the sample code from Streamlit. Then, I modified the generate_response method to call the Langflow flow, passing in the prompt and retrieving the response. The JSON-format response payload is parsed, and the answer is returned and stored in memory along with the original question. The program uses Streamlit’s built-in chat elements, st.chat_message and st.chat_input, to reduce the amount of code required to create a pleasing chat UI.

# Conversational Retrieval QA Chatbot, built using Langflow and Streamlit
# Author: Gary A. Stafford
# Date: 2023-07-28
# Usage: streamlit run streamlit_app.py
# Requirements: pip install streamlit -Uq

import logging
import sys
from typing import Optional

import requests
import streamlit as st

log_format = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
logging.basicConfig(format=log_format, stream=sys.stdout, level=logging.INFO)

BASE_API_URL = "http://localhost:7860/api/v1/process"
FLOW_ID = "<your-flow-id-goes-here>"
# You can tweak the flow by adding a tweaks dictionary
# e.g., {"OpenAI-XXXXX": {"model_name": "gpt-4"}}
TWEAKS = {
  "OpenAIEmbeddings-3OTU2": {},
  "Chroma-elGpI": {},
  "ChatOpenAI-38h1l": {"model_name": "gpt-4"},
  "ConversationalRetrievalChain-4FTbi": {},
  "ConversationBufferMemory-YTFcZ": {}
}
BASE_AVATAR_URL = (
    "https://raw.githubusercontent.com/garystafford-aws/static-assets/main/static"
)


def main():
    st.set_page_config(page_title="Virtual Bartender")

    st.markdown("##### Welcome to the Virtual Bartender")

    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Replay the chat history ("msg" avoids shadowing other names)
    for msg in st.session_state.messages:
        with st.chat_message(msg["role"], avatar=msg["avatar"]):
            st.write(msg["content"])

    if prompt := st.chat_input("I'm your virtual bartender, how may I help you?"):
        # Add user message to chat history
        st.session_state.messages.append(
            {
                "role": "user",
                "content": prompt,
                "avatar": f"{BASE_AVATAR_URL}/people-64px.png",
            }
        )
        # Display user message in chat message container
        with st.chat_message(
            "user",
            avatar=f"{BASE_AVATAR_URL}/people-64px.png",
        ):
            st.write(prompt)

        # Display assistant response in chat message container
        with st.chat_message(
            "assistant",
            avatar=f"{BASE_AVATAR_URL}/bartender-64px.png",
        ):
            message_placeholder = st.empty()
            with st.spinner(text="Thinking..."):
                assistant_response = generate_response(prompt)
                message_placeholder.write(assistant_response)
        # Add assistant response to chat history
        st.session_state.messages.append(
            {
                "role": "assistant",
                "content": assistant_response,
                "avatar": f"{BASE_AVATAR_URL}/bartender-64px.png",
            }
        )


def run_flow(inputs: dict, flow_id: str, tweaks: Optional[dict] = None) -> dict:
    api_url = f"{BASE_API_URL}/{flow_id}"

    payload = {"inputs": inputs}

    if tweaks:
        payload["tweaks"] = tweaks

    response = requests.post(api_url, json=payload)
    return response.json()


def generate_response(prompt):
    logging.info(f"question: {prompt}")
    inputs = {"question": prompt}
    response = run_flow(inputs, flow_id=FLOW_ID, tweaks=TWEAKS)
    try:
        logging.info(f"answer: {response['result']['answer']}")
        return response["result"]["answer"]
    except (KeyError, TypeError) as exc:
        logging.error(f"error: {exc}, response: {response}")
        return "Sorry, there was a problem finding an answer for you."


if __name__ == "__main__":
    main()        

We can use this same basic format as a template for many different Langflow conversational chat applications, simply swapping out the FLOW_ID.
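One way to make that swap even easier is to read the flow ID from an environment variable instead of hard-coding it. The variable name below, LANGFLOW_FLOW_ID, is my own convention, not something Langflow defines.

```python
import os

# Hypothetical convention: read the flow ID from the environment so the same
# Streamlit script can front different Langflow flows without code changes.
FLOW_ID = os.environ.get("LANGFLOW_FLOW_ID", "<your-flow-id-goes-here>")
print(FLOW_ID)
```

Launching with, e.g., `LANGFLOW_FLOW_ID=abc-123 streamlit run streamlit_app.py` then selects the flow at startup.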

Conclusion

In this brief post, we learned about the no-code capabilities of Langflow to produce prototypes of LangChain applications. Further, we learned how to use the sample code provided by Langflow and Streamlit to create a fully-functional conversational chatbot quickly with minimal coding.


This blog represents my viewpoints and not those of my employer, Amazon Web Services (AWS). All product names, logos, and brands are the property of their respective owners.
