How to run LLMs locally and interact with them in three simple steps

You will install the following two programs on your computer in this process:

  1. Ollama
  2. Docker

Steps to run Llama 3 locally and interact with it using a web UI

Step 1

Download and install Ollama from here: https://ollama.com

Now open a terminal and run the following command to download and run Llama 3:

ollama run llama3        

The model may take anywhere from a few minutes to a few hours to download depending on your internet speed, as it is more than 4 GB in size.

Once the download finishes, you can interact with the model from the command line, as shown below.

Screenshot: interacting with Llama 3 in the terminal
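After installing Ollama, a local server also exposes the model over an HTTP API on port 11434, and this is what the web UI in Step 2 will connect to. As a quick optional check (a minimal sketch; the prompt text here is just an example), you can query the API directly with curl:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

The reply comes back as a JSON object containing the model's response.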

Step 2

Download and install Docker Desktop from here: https://www.docker.com/products/docker-desktop/

Open a terminal and run the command below to download the Open WebUI Docker image and create a container. Open WebUI is a browser-based chat interface that connects to the Ollama server already running on your machine.

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main        

Here is the official GitHub page to learn more about Open WebUI: https://github.com/open-webui/open-webui

Open Docker Desktop and go to the Containers tab. Click the link under the Port(s) column; it will open http://localhost:3000/ in your browser.
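Alternatively, if you prefer the terminal to the Docker Desktop GUI, you can confirm the container is up and watch its startup logs with standard Docker commands, then open http://localhost:3000/ directly (a quick optional check; the container name open-webui comes from the command above):

docker ps --filter name=open-webui
docker logs -f open-webui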

Step 3

In the browser, provide a username, email, and password to register, then log in with the same credentials to start chatting.

Screenshot: interacting with Llama 3 using the web UI
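If you want more models to choose from in the web UI, you can pull them with Ollama from the terminal, for example (the model name here is only an example; check the Ollama library at https://ollama.com/library for what is available):

ollama pull mistral
ollama list

Any model pulled this way should then appear in the web UI's model selector.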


That's it, you are all set. Enjoy!

