Building and Deploying an AI Chatbot App with Streamlit in 19 Minutes
In the ever-evolving landscape of AI and machine learning, the rapid advancement of language models has made building interactive, intelligent conversational interfaces more accessible than ever.
In this article, I'll walk through how to build a chatbot using Google's Generative AI and deploy it to the cloud. With Streamlit, a user-friendly Python library for creating interactive ML/AI applications, we can quickly develop a chat interface that's both powerful and easy to deploy.
1. Key Technologies and Libraries
Main technologies and libraries used:
Streamlit: for building the web interface and deploying the app
LangChain: for composing the prompt, model, and output parser into a pipeline
langchain-google-genai: for access to Google's Gemini Pro model
Pydantic: for structuring chat messages
2. Local Development
The development process began with setting up a local environment. Here's a brief overview of the steps:
Environment Setup: Created a new Python virtual environment and installed the necessary libraries (streamlit, langchain, langchain-google-genai, pydantic).
Code Structure: Developed the main app.py file, which includes the Streamlit interface and the integration with Google's Generative AI through LangChain.
Sets up the Google API key using Streamlit's secrets management:
import os
import streamlit as st
os.environ["GOOGLE_API_KEY"] = st.secrets["GOOGLE_API_KEY"]
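When running locally, Streamlit reads secrets from a .streamlit/secrets.toml file in the project directory; a minimal sketch of that file (the key value here is a placeholder, and the file should never be committed to version control):

```toml
# .streamlit/secrets.toml -- add this file to .gitignore
GOOGLE_API_KEY = "your-api-key-here"
```

On Streamlit Cloud, the same key is entered through the app's secrets settings instead of a file.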
Initializes the Google Generative AI model (Gemini Pro) with streaming enabled for real-time responses:
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(model="gemini-pro", streaming=True)
Sets up the chat memory and prompt template, allowing the chatbot to maintain context across messages:
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

memory_key = 'history'
prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name=memory_key),
    ('human', '{input}')
])
Defines a Pydantic model for structuring chat messages:
from pydantic import BaseModel

class Message(BaseModel):
    content: str
    role: str
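The app itself uses Pydantic for validation, but for illustration the same message shape can be modeled with the standard library's dataclasses, which makes the role/content structure easy to test in isolation:

```python
from dataclasses import dataclass, asdict

@dataclass
class Message:
    content: str
    role: str  # "human" or "ai"

msg = Message(content="Hello", role="human")
# asdict() yields the dict form used when rendering chat history.
```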
Initializes the chat history in Streamlit's session state if it doesn't exist:
if "messages" not in st.session_state:
    st.session_state.messages = []
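The membership check matters because Streamlit reruns the whole script on every interaction: initialization must be idempotent so reruns don't wipe the history. The same pattern, with a plain dict standing in for st.session_state:

```python
session_state = {}  # stand-in for st.session_state

def ensure_history(state):
    # Only create the list on the first run; later reruns keep existing messages.
    if "messages" not in state:
        state["messages"] = []
    return state

ensure_history(session_state)
session_state["messages"].append({"role": "human", "content": "Hi"})
ensure_history(session_state)  # simulated rerun: history is preserved
```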
Creates the LangChain pipeline, combining the input processing, prompt template, language model, and output parsing:
from langchain_core.output_parsers import StrOutputParser

chain = {
    'input': lambda x: x['input'],
    'history': lambda x: to_message_placeholder(x['messages'])
} | prompt | llm | StrOutputParser()
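LangChain's | operator chains runnables so that each stage's output becomes the next stage's input. A toy re-implementation of that piping idea in plain Python (not the real LCEL classes; the model and parser here are stand-in lambdas):

```python
class Step:
    """Minimal stand-in for a LangChain runnable that supports | composition."""
    def __init__(self, fn):
        self.fn = fn
    def __or__(self, other):
        # Compose: run this step first, then feed its output to the next step.
        return Step(lambda x: other.fn(self.fn(x)))
    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda d: f"History: {d['history']} | Human: {d['input']}")
llm = Step(lambda text: f"echo({text})")   # stand-in for the language model
parser = Step(lambda out: out.strip())     # stand-in for StrOutputParser

chain = prompt | llm | parser
result = chain.invoke({"input": "Hi", "history": []})
```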
Sets up the Streamlit user interface with a main chat area and a sidebar for chat history:
left, right = st.columns([0.7, 0.3])
with left:
    ...  # chat interface code goes here
with right:
    ...  # chat history display goes here
Handles user input, processes it through the LangChain pipeline, and displays the AI's response:
user_input = st.chat_input("Hello, what can I do for you?")
if user_input:
    ...  # process user input, then generate and display the AI response
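Stripped of the Streamlit widgets, the per-turn logic amounts to: record the user's message, invoke the chain, record the reply. A hedged sketch with a stubbed chain in place of the real model call (function names here are illustrative):

```python
def handle_turn(messages, user_input, chain):
    # Record the user's message, generate a reply, record the reply too.
    messages.append({"role": "human", "content": user_input})
    reply = chain(user_input)
    messages.append({"role": "ai", "content": reply})
    return reply

messages = []
reply = handle_turn(messages, "Hello", chain=lambda s: s.upper())  # stub model
```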
API Key Management: Obtained a Google API key and used Streamlit's secrets management to handle it securely.
Testing: Ran the application locally using the streamlit run app.py command to ensure everything worked as expected.
3. Deploying to Streamlit Cloud
After successfully running the chatbot locally, the next step was to deploy it to Streamlit Cloud. This involved pushing the code to a GitHub repository, creating a new app on Streamlit Community Cloud pointed at that repository, and adding the GOOGLE_API_KEY to the app's secrets in the dashboard so the deployed app can authenticate just as it does locally.
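Streamlit Cloud installs dependencies from a requirements.txt in the repository root; a plausible minimal file for this app (unpinned here for brevity; exact versions are an assumption, and pinning them is safer for reproducible deploys):

```text
streamlit
langchain
langchain-google-genai
pydantic
```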
4. Key Learnings and Challenges
Throughout this project, several important lessons were learned, particularly around handling API keys securely with secrets management, preserving chat context in session state across Streamlit's script reruns, and streaming model responses for a more responsive interface.
5. Conclusion
Building and deploying a chatbot powered by Google's Generative AI using Streamlit has been an enlightening experience. It demonstrates how modern tools and platforms have significantly lowered the barrier to creating sophisticated AI applications. This demo serves as a stepping stone for more complex AI-driven applications and showcases the potential of combining powerful language models with user-friendly development frameworks.
Source code can be found at: https://github.com/GuilinDev/streamlit_chatbot
Online deployment on Streamlit cloud can be found at: https://guilindev-streamlit-chatbot-app-8lrshg.streamlit.app/