Running Local AI with Ollama

Why I Tried Running AI Locally

I recently started running AI models on my own computer, and it's been a game-changer. There's always the question of which data you're okay sending to the cloud and which is better kept private. The best part? No monthly fees, and I can customize and test everything.

Setting it up took some effort (the initial install is straightforward; it's the customization that takes time), but the control and privacy benefits have been worth it. If you're curious about local AI, I'd recommend giving it a try!

So, let’s grab Ollama and get this party started.

What’s This Ollama Thing Anyway?

Ollama is an open-source tool I recently discovered that lets you run AI language models on your own computer. It's surprisingly simple - just a few commands and you're up and running with local AI capabilities. I've found it works well with both small and large models, depending on what your hardware can handle.

Let’s Get It Running - Step by Step

Install Ollama:

curl -fsSL https://ollama.com/install.sh | sh        

Hit enter, and it’ll do its thing. You’ll see some text flying by, and in like a minute, Ollama’s installed. Done.
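Before moving on, it's worth a quick sanity check that the install actually worked. A minimal sketch, assuming the script put `ollama` on your PATH (which it does on Linux):

```shell
# Confirm the CLI is installed and print its version
ollama --version

# On Linux, the install script also registers a systemd service;
# check that the background server is up:
systemctl status ollama
```

If the version prints, you're good to go.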

Fire Up a Model:

I went with phi3 - it’s small, fast, and still pretty smart. The full list of available models is in the Ollama library on ollama.com.

ollama run phi3        

First time, it’ll download the model (takes a bit depending on your internet), then you’ll get a little “>>>” prompt. That’s it - your AI’s alive!

You can also pull different models and test them out:
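For example, here's a quick round of model shopping - `llama3.2` is just a stand-in for whatever catches your eye in the library:

```shell
# Download a model without starting a chat session
ollama pull llama3.2

# See everything you've downloaded, with sizes
ollama list

# Chat with whichever one you want to try
ollama run llama3.2
```

`pull` is handy when you want to grab a few models up front and compare them later.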

Time to Chat with It

After successfully installing Ollama, it was time for the first test. This moment reminded me of that classic "Hello World" experience every software developer knows - that pivotal first test when you see your code actually working.

I decided to start simple and asked: "What is AI?"

Solid answer, phi3! You can hit it with anything - ask for a joke, some code, whatever. It’s your playground. Oh, and when you’re done? Use Ctrl + d or /bye to exit.

To delete a model:

ollama rm phi3        

What’s Next? Let’s Level Up

After getting familiar with the command-line interface, the next step is a UI. I discovered that OpenWebUI offers a more ChatGPT-like interface right in your browser, powered by your local model.
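If you want a head start on that, the OpenWebUI docs suggest running it as a Docker container that talks to your local Ollama. A sketch, assuming Docker is installed and Ollama is listening on its default port:

```shell
# Run OpenWebUI in Docker; it reaches Ollama on the host via port 11434
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# Then open http://localhost:3000 in your browser
```

More on that in a follow-up post.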

Running AI locally = no subscription costs, complete privacy, and full control over your data. Perfect for testing AI-based apps and API calls.
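That last point deserves a quick demo. Ollama serves a REST API on localhost:11434 by default, so your apps can hit the local model the same way they'd hit a cloud one. Here's the same "What is AI?" question over the API (`"stream": false` returns one JSON response instead of a token stream):

```shell
# Ask phi3 a question via Ollama's local REST API
curl http://localhost:11434/api/generate -d '{
  "model": "phi3",
  "prompt": "What is AI?",
  "stream": false
}'
```

Swap the model name for anything you've pulled, and point your app at this endpoint instead of a paid API.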

Bonus: Guess which model’s the most popular?

19.5M Pulls - not bad

