You can build your own ChatGPT clone: Here's how
Amidst all the hype around GPT-4, which isn't free, what if I told you, 'Yo, you can spin up your own clone of OpenAI's ChatGPT on your own dear computer!' Interesting?
To set the right expectations: I personally tried this on a MacBook Air M2 (8GB RAM / 128GB storage), which isn't the ideal host machine for running the heavyweight models.
Before we move ahead, here's the outcome you can expect from this article.
Pre-Requisites
Ollama
Ollama, in short, is a framework for running machine learning models locally (mostly LLMs).
Ollama is natively supported on macOS and Linux. Windows users can use WSL and install the Linux version.
Step 1 : Download Ollama
Download it from here: https://ollama.com/
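Once the installer finishes, you can sanity-check the CLI from a terminal (the exact version output will vary by install):

```shell
# Confirm the Ollama CLI is installed and on your PATH
ollama --version

# List the models downloaded so far (empty on a fresh install)
ollama list
```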
Step 2 : Pulling the LLM
In this article, we'll use Meta's Llama 3 as our model. However, you can choose any model from the library.
ollama run llama3
You can start chatting with your LLM right away.
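For reference, here are a few variations of that command (the model names are just examples from the Ollama library):

```shell
# Download the model weights without opening a chat session
ollama pull llama3

# Start an interactive chat in the terminal (pulls the model first if it's missing)
ollama run llama3

# Ask a single question non-interactively
ollama run llama3 "Explain what an LLM is in one sentence."
```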
Once you're done, you can verify that Ollama is installed properly by running curl:
curl http://localhost:11434
If everything is fine, it responds with 'Ollama is running'.
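Beyond that health check, Ollama exposes a small REST API on the same port. Here's a minimal sketch of calling its /api/generate endpoint from Python with only the standard library (the model name and prompt are just examples):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON reply instead of streamed chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str, base_url: str = OLLAMA_URL) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        base_url + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs the server running and llama3 pulled):
# print(generate("llama3", "Why is the sky blue? One sentence."))
```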
Step 3: Let's beautify it with a UI
It's all 'bout beauty, folks!
We'll be using open-webui! Run the command below to spin up a container running the client.
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Now, open your web browser and go to localhost:3000
Do note that the first thing you'll see is a sign-in/sign-up page.
Create an account if you don't have one already.
The first account created gets the admin rights!
Boom! That's all there is to it, folks! You can download more models and select the model of your choice from the UI.
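For example, to add another model and check that the UI container is healthy (mistral here is just one option from the library):

```shell
# Pull an additional model for Ollama to serve
ollama pull mistral

# Confirm the open-webui container is running
docker ps --filter name=open-webui

# Tail its logs if the page doesn't load
docker logs -f open-webui
```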