Langflow: A simple way to build LLM applications locally without code.

Not long ago, building a smart chatbot with a reasonable interface took days or weeks. Now you can do it in minutes, for $0, and in a way that likely complies with most enterprise data policies.

Are you a business leader who wants to explore LLMs for your enterprise but doesn't want to rely on your tech team's resources? Here is a 5-minute way to get a ChatGPT-like interface running locally that you can then extend with your own logic.

First, download Ollama -- a simple app to discover and run LLMs locally.

Once it is installed, let's pull one of the most popular open-source models out there -- Mistral -- from the command line. Within a couple of minutes you will have a powerful LLM on your local machine, almost as good as GPT-3.5 for many common applications. And your data never leaves your premises.

ollama pull mistral
ollama run mistral

You can now talk to a powerful model right on your local machine. However, it has neither a nice interface nor tools to extend it.
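
Behind the chat prompt, Ollama also exposes a local REST API on port 11434, which is what Langflow will connect to in a moment. Here is a minimal sketch of calling it from Python (this assumes the requests package is installed; the prompt text is just an example):

import requests

# Ollama listens on http://localhost:11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])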

This is where Langflow comes in. It is a visual way to build LLM applications. You install it from your terminal:

pip install langflow
langflow run

It takes a couple of minutes to install and start. It will launch a server locally at http://127.0.0.1:7860.

In the browser you will see a nice interface for building LLM applications. From the panel on the left, pick ChatOllama as the LLM, set its Base URL to the local Ollama endpoint, http://localhost:11434, and set the model field to mistral.

Then drag and drop a ConversationChain from Chains on the left and connect the two components. In the bottom right, click the red power button to compile the flow, then start chatting with your application.
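
For reference, the flow you just wired up corresponds roughly to a few lines of LangChain, the library Langflow builds on. This is only a sketch, assuming a recent langchain and langchain-community install (the exact import paths vary by version):

from langchain_community.chat_models import ChatOllama
from langchain.chains import ConversationChain

# Same settings as in the Langflow UI: local Ollama endpoint, mistral model.
llm = ChatOllama(base_url="http://localhost:11434", model="mistral")

# ConversationChain keeps the chat history in memory between turns by default.
chain = ConversationChain(llm=llm)
print(chain.predict(input="Give me a one-line summary of what you can do."))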

You can use this as a foundation to build more complex applications. For example, Langflow lets you export a flow as JSON and load it from your own Python code, as sketched below.
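
A minimal sketch, assuming a Langflow version from around the time of writing and a flow exported to a hypothetical my_flow.json file:

from langflow import load_flow_from_json

# Load the flow exported from the Langflow UI (the file name here is hypothetical).
flow = load_flow_from_json("my_flow.json")
print(flow("What can you help me with?"))

Let me know how it goes.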
