How to run LLMs locally and interact with them in three simple steps
Azharuddin Mohammad
Indian national working as a Senior Data Engineer at FWD Insurance Technology and Innovation in Kuala Lumpur, Malaysia
You will be installing the following two programs on your computer in this process.
- Ollama
- Docker
Steps to run Llama 3 locally and interact with it using a web UI
Step 1
Download and install Ollama from https://ollama.com
Now open a terminal and run the following command to download and run Llama 3:
ollama run llama3
The model may take anywhere from a few minutes to a few hours to download depending on your internet speed, as it is more than 4 GB in size.
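Once the download completes, you can confirm that the model is available by listing everything Ollama has pulled so far:
ollama list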
You will then be able to interact with the model directly from the command line at Ollama's interactive prompt.
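If you prefer to call the model from a script or another application instead of the interactive prompt, Ollama also exposes a local REST API, by default on port 11434. Here is a minimal example using curl, assuming the default port and the llama3 model downloaded above (the prompt text is just a placeholder):
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'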
Step 2
Download and install Docker Desktop from https://www.docker.com/products/docker-desktop/
Open a terminal and run the command below to download the Open WebUI Docker image and create a container:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
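Once the command finishes, you can check that the container is running and, if something looks wrong, inspect its logs with the standard Docker commands (using the container name open-webui set above):
docker ps --filter name=open-webui
docker logs open-webui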
Here is the official GitHub page to learn more about Open WebUI: https://github.com/open-webui/open-webui
Open Docker Desktop and go to the Containers tab. Click the link under the Port(s) column, which will open http://localhost:3000/ in your browser.
Step 3
In the browser, provide a username, email, and password to register. Log in with the same credentials to start chatting.
Enjoy, you are all set!