In 3 steps and less than 50 lines of code, get the Gemma2:2b model running on a Raspberry Pi 5
Crepuscular rays at Joshua Tree National Park.


Gemma2:2b is a small language model (SLM) that can run on small form factor devices using only CPU and RAM (no GPU needed). This article covers how to run Ollama as a Docker container on a Raspberry Pi 5 with 8GB of RAM to serve the Gemma2:2b model.

1. Start a Docker container from the `ollama/ollama:0.3.3` image. Here the container is limited to 2 CPUs (50% of the Pi's four cores) to prevent CPU bottlenecks from freezing other applications.

$ docker run --cpus 2.0 -d -v ollama:/root/.ollama -p 11434:11434 --name ollama_container ollama/ollama:0.3.3
$ docker exec ollama_container ollama pull gemma2:2b        
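Once the container is up, Ollama exposes a REST API on port 11434; its `GET /api/tags` endpoint lists the pulled models. A minimal, stdlib-only sketch to confirm `gemma2:2b` was pulled (the sample response below is illustrative, trimmed to the one field we read):

```python
import json
from urllib import request

def installed_models(tags_json: str) -> list[str]:
    """Extract model names from an Ollama GET /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json)["models"]]

# Illustrative response shape (real responses carry more fields per model)
sample = '{"models": [{"name": "gemma2:2b"}]}'
print(installed_models(sample))  # ['gemma2:2b']

# Optional live check against the running container; skipped cleanly if
# the server is not reachable from where this script runs.
try:
    with request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
        print("On this Pi:", installed_models(resp.read().decode()))
except OSError as exc:
    print(f"Ollama not reachable, skipping live check: {exc}")
```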

2. Create a Python virtual environment and install the Python packages for Ollama (`ollama==0.3.1`) and LlamaIndex (`llama-index-llms-ollama==0.2.2`).

$ python3 -m venv .venv 
$ source .venv/bin/activate 
$ pip -q install --upgrade pip 
$ pip -q install ollama==0.3.1 llama-index-llms-ollama==0.2.2         
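Before running the full test script, you can talk to the model with nothing but the standard library, since the container speaks Ollama's REST API. A sketch using `POST /api/generate` (the prompt text is my own example; `stream: false` asks for a single JSON object instead of chunked output):

```python
import json
from urllib import request

def generate_payload(model: str, prompt: str) -> bytes:
    """Build a JSON body for Ollama's POST /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

body = generate_payload("gemma2:2b", "Describe crepuscular rays in one sentence.")

# Live call; degrades gracefully when the container is not running.
try:
    req = request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=120) as resp:
        print(json.loads(resp.read())["response"])
except OSError as exc:
    print(f"Ollama not reachable: {exc}")
```

On a Pi 5 the first generation is the slowest (the model is loaded into RAM), so a generous timeout helps.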

3. Run the test script!

$ wget https://gist.github.com/chaudhariatul/a71a1d02bbc7b4e391fe934cf2fc0b95/raw/4bc4a3d7e4c431b153cca2e191ceeefaafbb32e1/crepuscular_rays.py 
$ python crepuscular_rays.py           
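I have not reproduced the gist here; as a rough sketch of the kind of query such a script can make with the `llama-index-llms-ollama` package installed in step 2 (model name from this article, prompt and timeout are my own choices), it would look something like:

```python
MODEL = "gemma2:2b"
PROMPT = "In one sentence, what causes crepuscular rays?"

try:
    # LlamaIndex's Ollama wrapper talks to the local server on port 11434.
    from llama_index.llms.ollama import Ollama

    llm = Ollama(model=MODEL, request_timeout=120.0)
    print(llm.complete(PROMPT))
except Exception as exc:  # package missing or server not running
    print(f"Skipping live call ({exc}); install step 2 packages and start the container first.")
```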



#gemma2 #raspberrypi

