Quick demo of function calling with Mistral-7B-Instruct-v0.3

So, yesterday Mistral released mistralai/Mistral-7B-Instruct-v0.3, and this time it supports function calling! You'll find better blog posts explaining the details of this model; here you'll just find a quick guide on how to run it locally.

I've been loving AutoGen lately, so this guide shows how to get the new model running locally with it.


We will use the following tools: Ollama, LiteLLM, and AutoGen.

Here is the quick guide!

First, you'll have to install Ollama on your computer.

Once you are done with that, you'll have to run the model:

ollama run mistral
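If you want to separate the download from the first run, the step above can be split in two. This sketch assumes a recent Ollama, where the mistral tag tracks the latest Mistral-7B-Instruct release:

```shell
# Pull the model weights first (the "mistral" tag tracks the latest
# Mistral-7B-Instruct release in recent Ollama versions).
ollama pull mistral

# Start an interactive session. Ollama also exposes a local REST API
# on http://localhost:11434, which is what LiteLLM will talk to.
ollama run mistral
```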



Once you are done with that, you'll have to pip install litellm[proxy]; a Python environment like Anaconda works just fine.

Once you are finished installing LiteLLM, you'll have to start its proxy server pointing at the Ollama model.
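A minimal sketch of that command, assuming the default Ollama setup: the ollama/ prefix tells LiteLLM to route requests to your local Ollama server.

```shell
# Start an OpenAI-compatible proxy in front of the local Ollama model.
# LiteLLM prints the URL it serves on when it starts
# (http://0.0.0.0:4000 by default in recent versions); use that URL
# in your AutoGen config.
litellm --model ollama/mistral
```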

And we are almost there! Now you just have to pip install pyautogen (that's the package name for AutoGen) and you are all set!
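To point AutoGen at the local proxy, you build an llm_config like the sketch below. The port and model name are assumptions; match them to whatever your litellm command actually printed when it started.

```python
# Minimal sketch of an AutoGen llm_config aimed at a local LiteLLM proxy.
# base_url is an assumption (recent LiteLLM versions default to port 4000);
# replace it with the URL LiteLLM printed on startup.
config_list = [
    {
        "model": "ollama/mistral",
        "base_url": "http://0.0.0.0:4000",
        "api_key": "not-needed",  # local proxy; no real key is required
    }
]

llm_config = {"config_list": config_list, "timeout": 120}
```

You then pass this as llm_config=llm_config when constructing your autogen.AssistantAgent.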

You can also follow the steps here.
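Since the headline feature of v0.3 is function calling, here is a minimal sketch of a tool definition in the OpenAI function-calling format, which the LiteLLM proxy passes through to the model. The function name and parameters are hypothetical examples, not part of any library:

```python
# Hypothetical tool schema in the OpenAI function-calling format.
# The model sees this description and can respond with a structured
# call like {"name": "get_current_weather", "arguments": {...}}.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```

In AutoGen you would typically not write this dict by hand; decorating a plain Python function with register_for_llm on the assistant and register_for_execution on the user proxy generates an equivalent schema for you.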




