"Step-by-Step AI App Deployment Toolbox with LangChain, Mistral, and Langserve"

Mistral AI builds open-weight models and is a competitor to OpenAI. Because models such as Mistral 7B are openly available, you can run them locally without paying per-token API costs to access the model.

Ollama narrows the gap further: it lets you download and run open-source models such as Mistral and Llama in a few very simple steps.

Step 1: Install Ollama

Visit the official Ollama website and download the installer suitable for your operating system. On Windows, installing the .exe is just a matter of clicking through the consecutive "Next" prompts, and that's it. Done!


Step 2: Run Mistral with Ollama

With Ollama installed, you can now run the Mistral model on your local machine. Open a command prompt or terminal and execute the following command: "ollama run mistral"

If Mistral has not been downloaded yet, hold your horses for a bit while Ollama pulls the model manifest and weights from the cloud. Once the status reads "success", you can prompt and question the model just as you typically would in ChatGPT.

Step 3: Integrate Mistral with your favourite framework

Prompting Mistral typically takes about three lines of code. Since the model runs locally and is open source, you can skip the hassle of creating an API key. For the sake of simplicity, consider the free-flowing (script-style) version first.
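The article's original snippet is not reproduced here, so the following is a minimal sketch of direct prompting. It assumes a local Ollama server is running on its default port 11434 with the mistral model already pulled; with the official ollama Python client the call itself is roughly three lines, but this version sticks to the standard library and Ollama's documented REST endpoint (the function names are mine, not the article's).

```python
# Minimal sketch (assumption: local Ollama server on the default port 11434
# with the "mistral" model already pulled via "ollama run mistral").
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": "mistral", "prompt": prompt, "stream": False}

def ask_mistral(prompt: str) -> str:
    """Send one prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_mistral("Explain Mistral 7B in one sentence."))
```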

A more organized form, however, is to structure the code with a framework such as LangChain, using Runnables and related components like LCEL and LangServe.


LCEL (LangChain Expression Language) is a boon for AI app developers: it lets you write compact pipelines with very little hassle. Loosely, it reminds me of when Java's Collection streams were released and lists and maps could suddenly be processed in a pipeline with very little code.

The LangChain framework comes with two companions, LangServe and LangSmith, for deploying runnables and monitoring them respectively.

More about LangServe can be found on its official website.

The code below uses the FastAPI framework to build the API.

A simple example that uses LangChain to build an AI app and LangServe to deploy it, based on Mistral or any model of your choice, follows:

Import Required Libraries


Define the MistralDemo Class

The class above initializes the model, generates the system and human prompt templates, and finally sets up and runs a FastAPI application, adding routes for the provided chain.


Main Function


The main function performs the following steps:

1. Creates an instance of the MistralDemo class.
2. Initializes a language model (Mistral, served locally through Ollama).
3. Generates a chat prompt template.
4. Creates a chain by combining the prompt template, language model, and an output parser.
5. Invokes the chain with a sample input and prints the result.
6. Sets up and runs a FastAPI application with the created chain.

To run the application, save the script as langchain_mistral_demo.py and execute it with Python ("python langchain_mistral_demo.py").







