Running Local AI with Ollama
Why I Tried Running AI Locally
I recently started running AI models on my own computer, and it's been a game-changer. There's always the question of which data you're OK sending to the cloud and which is better kept private. The best part? No monthly fees, and I can customize and test everything.
Setting it up took some effort (the initial install is pretty straightforward; the customization is where the time goes), but the control and privacy benefits have been worth it. If you're curious about local AI, I'd recommend giving it a try!
So, let’s grab Ollama and get this party started.
What’s This Ollama Thing Anyway?
Ollama is an open-source tool I recently discovered that lets you run AI language models on your own computer. It's surprisingly simple - just a few commands and you're up and running with local AI capabilities. I've found it works well with both small and large models, depending on what your hardware can handle.
Let’s Get It Running - Step by Step
Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Hit enter, and it’ll do its thing. You’ll see some text flying by, and in like a minute, Ollama’s installed. Done.
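Quick sanity check: ask it for its version. (By the way, this script is the Linux route; on macOS and Windows you'd grab the installer from ollama.com instead.)

ollama --version

If that prints a version number, you're good to go.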
Fire Up a Model:
I went with phi3 - it's small, fast, and still pretty smart. The full list of available models is in the Ollama library at https://ollama.com/library.
ollama run phi3
First time, it’ll download the model (takes a sec depending on your internet), then you’ll get a little “>>>” prompt. That’s it - your AI’s alive!
You can also pull different models and test them out - here’s the rough idea (llama3 below is just an example; swap in whatever catches your eye in the library):
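ollama pull llama3   # download a model without starting a chat
ollama list          # see which models you have locally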
Time to Chat with It
After successfully installing Ollama, it was time for the first test. It reminded me of that classic "Hello World" moment every software developer knows - the point where you see your code actually working.
I decided to start simple and asked: "What is AI?"
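If you're following along, the exchange looks roughly like this (the actual answer will vary from run to run, so I won't pretend to quote it):

>>> What is AI?
...phi3 streams its answer here...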
Solid answer, phi3! You can hit it with anything - ask for a joke, some code, whatever. It’s your playground. Oh, and when you’re done? Use Ctrl + d or /bye to exit.
To delete a model:
ollama rm phi3
What’s Next? Let’s Level Up
After getting familiar with the command-line interface, the next step was a proper UI. I discovered that OpenWebUI offers a more ChatGPT-like interface right in your browser, but powered by your local model.
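If you want to try it, Docker is the usual route. At the time of writing, the OpenWebUI docs suggest something like this - check the project README for the current command, since the image tag and flags may change:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

Then point your browser at http://localhost:3000 and it should pick up your local Ollama automatically.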
Running AI locally = no subscription costs, complete privacy, and full control over your data. Perfect for testing AI-based apps and trying out API calls.
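On that API note: Ollama exposes a local REST API on port 11434 by default, so your apps can talk to it directly. A minimal sketch with curl, using the phi3 model from earlier:

curl http://localhost:11434/api/generate -d '{
  "model": "phi3",
  "prompt": "What is AI?",
  "stream": false
}'

You get a JSON response back with the model's answer - no API key, no network round trip.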
Bonus: Guess which model’s the most popular?
19.5M Pulls - not bad