Run LLMs Easily on Linux: Install Ollama from the Command Line
Md Rasel Khondokar
Ph.D. Student | Natural Language Processing | LLM | RAG | Machine Learning | Computer Vision | Team Lead
LLMs are large, so running them locally can be slow and fiddly: you might install to the wrong partition and run out of disk space, hit timeouts while loading a large model, need to debug Ollama, or want to serve it on a custom port. The commands below handle each of these cases.
# Download Ollama (Linux x86_64 build)
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
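# (Optional) Confirm you are on an x86_64 machine first; ARM machines need the arm64 tarball from ollama.com/download instead
uname -m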
# Create the target directory and extract the downloaded archive into it
mkdir -p /work/apps
tar -C /work/apps -xzf ollama-linux-amd64.tgz
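# (Optional) Inspect what was extracted; the archive should contain at least bin/ollama
ls /work/apps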
# Move the Ollama executable up one level (note: recent tarballs may also ship a lib/ directory with GPU libraries; if GPU acceleration breaks, keep the original bin/ layout and add /work/apps/bin to PATH instead)
mv /work/apps/bin/ollama /work/apps/
# Make sure the binary is executable (usually already set by tar, but harmless to ensure)
chmod +x /work/apps/ollama
# Add the directory to PATH in ~/.bashrc so the ollama command is available in every shell
echo 'export PATH=/work/apps:$PATH' >> ~/.bashrc
source ~/.bashrc
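# (Optional) Verify the command is on your PATH and runnable
which ollama
ollama --version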
# Choose where pulled models are stored (the default is ~/.ollama/models)
echo 'export OLLAMA_MODELS=/work/apps' >> ~/.bashrc
source ~/.bashrc
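# (Optional) Check free space on the model partition before pulling; even small models need a few GB
df -h /work/apps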
# Increase the load timeout (default is 5m) so large models don't time out while loading
echo 'export OLLAMA_LOAD_TIMEOUT="59m0s"' >> ~/.bashrc
source ~/.bashrc
# In case you need verbose debug logs
#export OLLAMA_DEBUG=1
# In case you need a custom host/port (the default is 127.0.0.1:11434)
#export OLLAMA_HOST=127.0.0.1:11435
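# (Optional) With a custom port, point clients at it, e.g. list installed models over the REST API:
#curl http://127.0.0.1:11435/api/tags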
# Start the Ollama server (this blocks the current terminal)
ollama serve
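# Since ollama serve keeps running in the foreground, run the next command in a second terminal,
# or background the server instead, e.g. (the log path here is just an example):
#nohup ollama serve > /work/apps/ollama.log 2>&1 &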
# Run a specific LLM (pulled automatically on first use)
ollama run llama3.2:3b
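# (Optional) Pass a one-shot prompt instead of opening the interactive chat
ollama run llama3.2:3b "Explain retrieval-augmented generation in one sentence."
# (Optional) List the models you have pulled, or query the REST API directly (default port 11434)
ollama list
curl http://localhost:11434/api/generate -d '{"model": "llama3.2:3b", "prompt": "Hello", "stream": false}'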