Building a Streamlit App to Interact with MindsDB and Ollama - Tutorial
In today's fast-paced world, leveraging artificial intelligence (AI) for both business and personal use can provide significant advantages. By integrating AI capabilities into your applications, you can automate processes, gain insights from data, and enhance user experiences. This tutorial will guide you through creating a Streamlit application that interacts with two powerful AI tools: MindsDB and Ollama. MindsDB is an open-source AI layer that enables you to integrate machine learning models into databases easily, while Ollama provides locally hosted language models.
By following this tutorial, you will learn how to set up and configure a Streamlit app that communicates with these AI tools, offering personalized responses based on user input. Whether you are a business owner looking to automate customer service or an individual interested in experimenting with AI, this guide will help you create a robust and interactive application.
Step 1: Setting Up the Development Environment
Prerequisites
Ensure you have the following installed on your system:
- Python 3.8 or later, with pip
- Ollama, running locally with at least one model pulled (for example, llama3)
- MindsDB, running locally (for example, via Docker)
Create a Project Directory
Create a directory for your project and navigate into it:
mkdir streamlit-mindsdb-ollama
cd streamlit-mindsdb-ollama
Set Up a Virtual Environment
It is a good practice to use a virtual environment to manage your project dependencies. Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate # On Windows use `venv\Scripts\activate`
Install Required Packages
Install the necessary Python packages:
pip install streamlit requests python-dotenv
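Optionally, record these dependencies in a requirements.txt so the environment can be recreated later with pip install -r requirements.txt (unpinned here; pin versions if you need reproducibility):

```text
streamlit
requests
python-dotenv
```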
Step 2: Creating a .env File
Create a file named .env in your project directory to store environment variables. This file should contain the necessary configuration for connecting to MindsDB and Ollama:
OLLAMA_HOST=localhost
OLLAMA_PORT=11434
MINDSDB_HOST=localhost
MINDSDB_PORT=47334
MINDSDB_API_KEY=your_mindsdb_api_key
Replace your_mindsdb_api_key with your actual MindsDB API key, and adjust the ports if your Ollama (default 11434) or MindsDB (default 47334) instance listens elsewhere.
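Under the hood, load_dotenv() simply reads KEY=VALUE pairs from the .env file into os.environ. A minimal illustrative parser (a stand-in to show the idea, not the actual python-dotenv implementation) looks like this:

```python
import os

def load_env_file(path=".env"):
    """Parse simple KEY=VALUE lines into a dict and os.environ.

    A minimal stand-in for python-dotenv's load_dotenv(): it skips blank
    lines and '#' comments and does not handle quoting or interpolation.
    """
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
            # Existing environment variables take precedence.
            os.environ.setdefault(key.strip(), value.strip())
    return values
```

After this runs, os.getenv("OLLAMA_HOST") returns the value from the file unless the variable was already set in the shell.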
Step 3: Creating the Streamlit App (app.py)
Create a file named app.py in your project directory and add the following code:
import streamlit as st
import requests
from requests.exceptions import RequestException
from dotenv import load_dotenv
import os
import subprocess
# Load environment variables from .env file
load_dotenv()
# Run MindsDB setup
def run_mindsdb_setup():
    try:
        subprocess.run(["python", "setup_mindsdb.py"], check=True)
    except subprocess.CalledProcessError as e:
        st.error(f"Failed to set up MindsDB: {e}")
# Fetch environment variables
OLLAMA_HOST = os.getenv("OLLAMA_HOST")
OLLAMA_PORT = os.getenv("OLLAMA_PORT")
MINDSDB_HOST = os.getenv("MINDSDB_HOST")
MINDSDB_PORT = os.getenv("MINDSDB_PORT")
MINDSDB_API_KEY = os.getenv("MINDSDB_API_KEY")
# Sidebar settings
st.sidebar.header("Settings")
# API settings
st.sidebar.subheader("API Settings")
use_mindsdb = st.sidebar.checkbox("Use MindsDB", value=True)
ollama_host = st.sidebar.text_input("Ollama Host", value=OLLAMA_HOST)
ollama_port = st.sidebar.text_input("Ollama Port", value=OLLAMA_PORT)
def fetch_available_ollama_models(host, port):
    # A locally hosted Ollama serves plain HTTP; /api/tags lists installed models.
    api_url = f"http://{host}:{port}/api/tags"
    try:
        response = requests.get(api_url)
        if response.status_code == 200:
            models = response.json().get('models', [])
            return [model['name'] for model in models]
        else:
            st.error(f"Failed to fetch Ollama models: {response.text}")
            return []
    except RequestException as e:
        st.error(f"An error occurred while fetching Ollama models: {e}")
        return []
ollama_models = fetch_available_ollama_models(ollama_host, ollama_port)
ollama_model = st.sidebar.selectbox("Ollama Model", ollama_models)
mindsdb_host = st.sidebar.text_input("MindsDB Host", value=MINDSDB_HOST)
mindsdb_port = st.sidebar.text_input("MindsDB Port", value=MINDSDB_PORT)
mindsdb_api_key = st.sidebar.text_input("MindsDB API Key", value=MINDSDB_API_KEY)
# Username input
username = st.sidebar.text_input("Username", value="User")
# Initialize session state variables if they are not already initialized
if "use_mindsdb" not in st.session_state:
    st.session_state.use_mindsdb = use_mindsdb
if "ollama_host" not in st.session_state:
    st.session_state.ollama_host = ollama_host
if "ollama_port" not in st.session_state:
    st.session_state.ollama_port = ollama_port
if "ollama_model" not in st.session_state:
    st.session_state.ollama_model = ollama_model
if "mindsdb_host" not in st.session_state:
    st.session_state.mindsdb_host = mindsdb_host
if "mindsdb_port" not in st.session_state:
    st.session_state.mindsdb_port = mindsdb_port
if "mindsdb_api_key" not in st.session_state:
    st.session_state.mindsdb_api_key = mindsdb_api_key
if "username" not in st.session_state:
    st.session_state.username = username
if "messages" not in st.session_state:
    st.session_state.messages = []
# Update button
if st.sidebar.button("Update Settings"):
    st.session_state.use_mindsdb = use_mindsdb
    st.session_state.ollama_host = ollama_host
    st.session_state.ollama_port = ollama_port
    st.session_state.ollama_model = ollama_model
    st.session_state.mindsdb_host = mindsdb_host
    st.session_state.mindsdb_port = mindsdb_port
    st.session_state.mindsdb_api_key = mindsdb_api_key
    st.session_state.username = username
    st.sidebar.success("Settings updated successfully.")
    run_mindsdb_setup()
# Ollama's completion endpoint is /api/generate; both services answer on plain HTTP locally.
OLLAMA_API_URL = f"http://{st.session_state.ollama_host}:{st.session_state.ollama_port}/api/generate"
MINDSDB_API_URL = f"http://{st.session_state.mindsdb_host}:{st.session_state.mindsdb_port}/api/sql/query"
MINDSDB_HEADERS = {"Authorization": f"Bearer {st.session_state.mindsdb_api_key}"}

def send_message_to_mindsdb(message):
    # Double single quotes so user input cannot terminate the SQL string literal.
    safe_message = message.replace("'", "''")
    query = f"SELECT response FROM ollama_model WHERE text = '{safe_message}' AND username = '{st.session_state.username}'"
    data = {"query": query}
    try:
        response = requests.post(MINDSDB_API_URL, headers=MINDSDB_HEADERS, json=data)
        response.raise_for_status()  # Raise an exception for non-2xx status codes
        result = response.json()
        # Extract the first cell of the first result row
        if "data" in result and isinstance(result["data"], list) and len(result["data"]) > 0 and isinstance(result["data"][0], list) and len(result["data"][0]) > 0:
            llm_response = result["data"][0][0]
        else:
            st.error("Unexpected response format from MindsDB.")
            llm_response = "Failed to get a valid response from MindsDB."
    except RequestException as e:
        st.error(f"Error communicating with MindsDB: {e}")
        llm_response = "Failed to communicate with MindsDB."
    return llm_response, "🤖"

def send_message_to_ollama(message):
    data = {
        "model": st.session_state.ollama_model,
        "prompt": f"{message}\n\nUsername: {st.session_state.username}",
        "stream": False,  # Return a single JSON object instead of a token stream
    }
    try:
        response = requests.post(OLLAMA_API_URL, json=data)
        response.raise_for_status()
        # /api/generate returns the completed text in the "response" field
        llm_response = response.json().get("response", "").strip()
    except RequestException as e:
        st.error(f"Error communicating with Ollama: {e}")
        llm_response = "Failed to communicate with Ollama."
    return llm_response, "🤖"
def send_message(message):
    if st.session_state.use_mindsdb:
        llm_response, bot_icon = send_message_to_mindsdb(message)
    else:
        llm_response, bot_icon = send_message_to_ollama(message)
    return llm_response, bot_icon
# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Accept user input
if user_input := st.chat_input(f"{st.session_state.username}:"):
    st.session_state.messages.append({"role": st.session_state.username, "content": user_input})
    with st.chat_message(st.session_state.username):
        st.markdown(user_input)
    llm_response, bot_icon = send_message(user_input)
    st.session_state.messages.append({"role": bot_icon, "content": llm_response})
    with st.chat_message(bot_icon):
        st.markdown(llm_response)
st.sidebar.header("Application Info")
st.sidebar.info("This application interacts directly with a locally hosted LLM using either MindsDB or Ollama.")
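Note that the chat history kept in st.session_state.messages is lost whenever the Streamlit server restarts. If you want it to survive restarts, a small sketch like the following could serialize the message list to a JSON file (a hypothetical extension, not part of the tutorial code; the file name is an assumption):

```python
import json

def save_history(messages, path="chat_history.json"):
    """Write the chat history (a list of {"role": ..., "content": ...} dicts) to disk."""
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(messages, fh, ensure_ascii=False, indent=2)

def load_history(path="chat_history.json"):
    """Read the chat history back, returning an empty list if the file is absent."""
    try:
        with open(path, encoding="utf-8") as fh:
            return json.load(fh)
    except FileNotFoundError:
        return []
```

You would call load_history() when initializing st.session_state.messages and save_history() after appending each new message.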
Step 4: Setting Up MindsDB with a Python Script (setup_mindsdb.py)
Create a file named setup_mindsdb.py in your project directory and add the following code:
import requests
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# Fetch environment variables
MINDSDB_API_URL = f"http://{os.getenv('MINDSDB_HOST')}:{os.getenv('MINDSDB_PORT')}/api/sql/query"
MINDSDB_API_KEY = os.getenv("MINDSDB_API_KEY")
def setup_mindsdb():
    # Define the queries to set up the MindsDB engine and model
    engine_creation_query = """
    DROP MODEL IF EXISTS ollama_model;
    DROP ML_ENGINE IF EXISTS ollama_engine;
    CREATE ML_ENGINE ollama_engine
    FROM ollama;
    CREATE MODEL ollama_model
    PREDICT response
    USING
        engine = 'ollama_engine',
        model_name = 'llama3',
        prompt_template = 'respond to {{text}} by {{username}}',
        -- host.docker.internal lets a Dockerized MindsDB reach Ollama on the host
        ollama_serve_url = 'http://host.docker.internal:11434';
    """
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {MINDSDB_API_KEY}"
    }
    data = {
        "query": engine_creation_query
    }
    try:
        response = requests.post(MINDSDB_API_URL, json=data, headers=headers)
        response.raise_for_status()  # Raise an exception for non-2xx status codes
        print("MindsDB setup completed successfully.")
    except requests.RequestException as e:
        print(f"An error occurred while setting up MindsDB: {e}")

if __name__ == "__main__":
    setup_mindsdb()
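The nested isinstance checks that app.py uses to unpack MindsDB's response can be factored into a small pure function. This sketch (a hypothetical refactoring, with an illustrative payload) makes the expected list-of-rows shape of the /api/sql/query response explicit and easy to test:

```python
def extract_first_cell(result):
    """Return result["data"][0][0] when the payload has the expected
    list-of-rows shape, otherwise None."""
    data = result.get("data")
    if isinstance(data, list) and data and isinstance(data[0], list) and data[0]:
        return data[0][0]
    return None

# Illustrative payload shaped like a one-row, one-column query result.
sample_result = {"column_names": ["response"], "data": [["Hello there!"]]}
```

With this helper, the error branch in send_message_to_mindsdb reduces to checking whether extract_first_cell returned None.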
Step 5: Running the Application
Run MindsDB Setup
Before running the Streamlit app, ensure that MindsDB is set up correctly:
python setup_mindsdb.py
Run the Streamlit App
Start the Streamlit app:
streamlit run app.py
You should now see your Streamlit app running in your web browser, allowing you to interact with either MindsDB or Ollama models based on the settings in the sidebar.
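If the model dropdown in the sidebar stays empty, verify that Ollama's /api/tags endpoint is reachable and returns the shape fetch_available_ollama_models expects. The payload below is an illustrative sample of that shape (the model names are examples, not guaranteed to be installed on your machine):

```python
def model_names(tags_payload):
    """Extract model names from an Ollama /api/tags-style payload."""
    return [model["name"] for model in tags_payload.get("models", [])]

# Illustrative /api/tags-style response body.
sample = {"models": [{"name": "llama3:latest"}, {"name": "mistral:latest"}]}
```

If model_names returns an empty list for your real response, pull a model first (for example, ollama pull llama3) and reload the app.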
Conclusion
By following this tutorial, you have successfully created a Streamlit application that leverages the power of MindsDB and Ollama. This application can be utilized in various business scenarios, such as automating customer support, generating insights from data, and personalizing user interactions. For personal use, it can serve as an educational tool for learning about AI and machine learning models.
Using the MindsDB framework, you can easily integrate AI capabilities into your applications, making them more intelligent and responsive to user needs. This tutorial provides a solid foundation, and you can further customize and extend the application to meet your specific requirements. This idea can be extended to create your own prediction model, which is beneficial for business database interactions, handling files, retrieval-augmented generation (RAG), agent-based tasks, and more. Whether you are a developer, business owner, or AI enthusiast, this project demonstrates the practical benefits of incorporating AI into everyday applications.
@gustealo | https://agustealo.com