Building and Deploying an AI Chatbot App with Streamlit in 19 Minutes

In the ever-evolving landscape of AI and machine learning, creating interactive applications that leverage cutting-edge language models has become more accessible than ever, opening up new possibilities for intelligent conversational interfaces.

In this article, I'll walk through how to build a chatbot using Google's Generative AI and deploy it to the cloud. With Streamlit, a user-friendly Python library for creating interactive ML/AI applications, we can quickly develop a chat interface that's both powerful and easy to deploy.

1. Key Technologies and Libraries

Main technologies and libraries used:

  1. Streamlit: An open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science projects.
  2. LangChain: A framework for developing applications powered by language models, providing tools to integrate AI models into various applications.
  3. Google Generative AI: Google's state-of-the-art language model that can understand and generate human-like text based on the input it receives.
  4. Pydantic: A data validation library that uses Python type annotations to enforce type hints at runtime and provide user-friendly errors when data is invalid.

2. Local Development

The development process began with setting up a local environment. Here's a brief overview of the steps:

Environment Setup: Created a new Python virtual environment and installed the necessary libraries (streamlit, langchain, langchain-google-genai, pydantic).

Dependencies

Code Structure: Developed the main app.py file, which includes the Streamlit interface and the integration with Google's Generative AI through LangChain.

Sets up the Google API key using Streamlit's secrets management:

import os
import streamlit as st

os.environ["GOOGLE_API_KEY"] = st.secrets["GOOGLE_API_KEY"]

Initializes the Google Generative AI model (Gemini Pro) with streaming enabled for real-time responses:

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", streaming=True)
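With streaming enabled, the model returns its reply as incremental chunks rather than one final string. A framework-free sketch of how a UI might accumulate those chunks for live display (the chunk list here is illustrative; a real stream would come from the model):

```python
def accumulate_stream(chunks):
    """Yield the growing response text as each streamed chunk arrives."""
    text = ""
    for chunk in chunks:
        text += chunk
        yield text  # the UI re-renders this partial text on every step

# Illustrative chunks standing in for a real model stream
partials = list(accumulate_stream(["Hel", "lo, ", "world"]))
```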

Sets up the chat memory and prompt template, allowing the chatbot to maintain context across messages:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

memory_key = 'history'
prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name=memory_key),
    ('human', '{input}')
])
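Conceptually, the template splices the stored history in at the placeholder and appends the new human turn. A dependency-free sketch of that assembly (the `(role, content)` tuple format mirrors what the prompt template accepts):

```python
def build_messages(history, user_input):
    """history: list of (role, content) tuples; returns the full message
    list in the order the model will see it."""
    return list(history) + [("human", user_input)]

msgs = build_messages(
    [("human", "Hi"), ("ai", "Hello! How can I help?")],
    "What is Streamlit?",
)
```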

Defines a Pydantic model for structuring chat messages:

from pydantic import BaseModel

class Message(BaseModel):
    content: str
    role: str
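This model gives each chat turn a validated shape, so malformed entries are rejected before they reach the chain. A quick usage sketch:

```python
from pydantic import BaseModel, ValidationError

class Message(BaseModel):
    content: str
    role: str

msg = Message(content="Hello", role="human")

# A turn missing its role fails validation instead of silently passing through
try:
    Message(content="missing role")
    valid = True
except ValidationError:
    valid = False
```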

Initializes the chat history in Streamlit's session state if it doesn't exist:

if "messages" not in st.session_state:
    st.session_state.messages = []        

Creates the LangChain pipeline, combining the input processing, prompt template, language model, and output parsing:

from langchain_core.output_parsers import StrOutputParser

chain = {
    'input': lambda x: x['input'],
    'history': lambda x: to_message_placeholder(x['messages'])
} | prompt | llm | StrOutputParser()
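The article doesn't show `to_message_placeholder`; a plausible implementation converts the stored `Message` objects into `(role, content)` tuples, which the prompt's placeholder accepts. A dependency-free sketch, with `SimpleNamespace` standing in for `Message`:

```python
from types import SimpleNamespace

def to_message_placeholder(messages):
    """Convert stored Message-like objects into (role, content) tuples."""
    return [(m.role, m.content) for m in messages]

history = [
    SimpleNamespace(role="human", content="Hi"),
    SimpleNamespace(role="ai", content="Hello!"),
]
pairs = to_message_placeholder(history)
```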

Sets up the Streamlit user interface with a main chat area and a sidebar for chat history:

left, right = st.columns([0.7, 0.3])
with left:
    ...  # Chat interface code
with right:
    ...  # Chat history display
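One way to keep the right-hand history column compact is to condense each turn into a one-line summary. `history_lines` below is a hypothetical helper for illustration, not part of the original app:

```python
def history_lines(messages, max_chars=30):
    """Condense each turn dict into a short 'role: text' line for display."""
    return [f"{m['role']}: {m['content'][:max_chars]}" for m in messages]

lines = history_lines([
    {"role": "human", "content": "What is Streamlit?"},
    {"role": "ai", "content": "An open-source Python app framework."},
])
```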

Handles user input, processes it through the LangChain pipeline, and displays the AI's response:

user_input = st.chat_input("Hello, what can I do for you?")
if user_input:
    ...  # Process user input, generate and display the AI response
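Stripped of the Streamlit calls, the turn-handling logic reduces to: append the user's message, run the chain, append the reply. A framework-free sketch where `chain_invoke` is a stub standing in for the real pipeline call:

```python
def handle_turn(messages, user_input, chain_invoke):
    """Append the user turn, get the model reply, append and return it."""
    messages.append({"role": "human", "content": user_input})
    reply = chain_invoke({"input": user_input, "messages": messages})
    messages.append({"role": "ai", "content": reply})
    return reply

# Stub "model" that echoes the input; the real app passes the LangChain chain
chat = []
reply = handle_turn(chat, "Hi there", lambda x: f"You said: {x['input']}")
```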

API Key Management: Obtained Google API key and used Streamlit's secrets management for securely handling it.


API Key for Gemini Pro


Streamlit's Security Management file

Testing: Ran the application locally using the streamlit run app.py command to ensure everything worked as expected.


Local Chatbot Default Page


Chat with History

3. Deploying to Streamlit Cloud

After successfully running the chatbot locally, the next step was to deploy it to Streamlit Cloud. This process involved:

  • GitHub Repository: Created a public GitHub repository for the project, ensuring that sensitive information like API keys was not committed.
  • Streamlit Cloud Setup: Connected the GitHub repository to Streamlit Cloud and configured the deployment settings.


Connect GitHub to Streamlit

  • Secrets Management: Utilized Streamlit Cloud's built-in secrets management to securely store the Google API key.


Streamlit Cloud's Secret Management

  • Deployment: Deployed the application and verified its functionality in the cloud environment.


Deploy


4. Key Learnings and Challenges

Throughout this project, several important lessons were learned:

  1. Streamlit's Versatility: Streamlit makes it remarkably fast to build interactive web applications for AI projects.
  2. API Key Security: The importance of keeping API keys and other sensitive information secure, especially when working with public repositories.
  3. Git Management: Proper use of .gitignore to prevent committing sensitive files and how to remove accidentally tracked files from Git's cache.
  4. Cloud vs Local Environment: Understanding the differences between local and cloud environments, particularly in terms of secrets management.

Conclusion

Building and deploying a chatbot powered by Google's Generative AI using Streamlit has been an enlightening experience. It demonstrates how modern tools and platforms have significantly lowered the barrier to creating sophisticated AI applications. This demo serves as a stepping stone for more complex AI-driven applications and showcases the potential of combining powerful language models with user-friendly development frameworks.

Source code can be found at: https://github.com/GuilinDev/streamlit_chatbot

The live deployment on Streamlit Cloud can be found at: https://guilindev-streamlit-chatbot-app-8lrshg.streamlit.app/
