Getting Started with Local LLMs using Ollama
Akshay Dongare
Master of Computer Science Student @ North Carolina State University | AI/ML Engineer | NLP & Computer Vision Specialist | Certified TensorFlow Developer | Google Cloud Certified Cloud Digital Leader
Check out my starter guide on local LLMs on GitHub to set up and start working with local, open-source, free, and private Large Language Models!
Ollama-Local-LLM
Getting started with Ollama and self-hosting Large Language Models for local AI solutions
Setup Steps
Create a new Conda environment
In a terminal, run:
conda create -n llm python
Activate the Conda environment
In a terminal, run:
conda activate llm
Install all requirements
In a terminal, run:
pip install -r requirements.txt
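The repository's requirements.txt is not reproduced here; as a rough assumption, the notebooks referenced below would need at least the following packages (the repo's actual pinned list may differ):

# hypothetical minimal requirements.txt; check the repo for the authoritative list
ollama
llama-index
llama-index-llms-ollama
jupyter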
Go to the Ollama website
Download the installer for Windows
Run the Ollama installer
Pull a model
ollama pull llama2
Start the Ollama server
ollama serve
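Once the server is running, Ollama exposes an HTTP API on localhost:11434 by default. As a minimal sketch, you can check it from Python (the requests package used here is an extra dependency, an assumption rather than something the repo necessarily pins):

import requests

# Ollama's local server listens on http://localhost:11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",   # any model already pulled with `ollama pull`
        "prompt": "Why is the sky blue?",
        "stream": False,     # ask for a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated completion text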
Choice of model: any model from the Ollama library can be pulled. List the models already downloaded locally with
ollama list
Running a model from the terminal
ollama run llama2
Display the Modelfile of any model
ollama show {model_name} --modelfile
ollama show llama2 --modelfile
Create a custom model by editing its Modelfile
ollama create {custom_model_name} --file {path_to_modelfile}
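For illustration, a minimal Modelfile might look like the following; the base model, parameter value, and system prompt are placeholders, not taken from the repo:

# Modelfile (hypothetical example)
FROM llama2
PARAMETER temperature 0.7
SYSTEM You are a concise assistant that answers in plain English.

Saving this as Modelfile and running, say, ollama create my-llama2 --file ./Modelfile registers the customized model under that name.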
Verify that the new model appears in the list
ollama list
Usage with the Ollama Python Library
Activate the environment, then open and run the notebook:
conda activate llm
./ollama-python.ipynb
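The notebook itself is not reproduced here; as a minimal sketch of the ollama Python package (assuming it is installed via pip install ollama and that ollama serve is running with llama2 pulled), a single chat call looks like this:

import ollama

# Send a one-turn chat request to the locally served model.
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "Explain RAG in one sentence."}],
)
print(response["message"]["content"])  # the model's reply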
Usage with LlamaIndex
Activate the environment, then open and run the notebook:
conda activate llm
./ollama-llamaindex.ipynb
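Again, the notebook is not reproduced here; as a minimal sketch of LlamaIndex's Ollama integration (the import path below comes from the llama-index-llms-ollama package and may differ across LlamaIndex versions):

from llama_index.llms.ollama import Ollama

# Point LlamaIndex's LLM abstraction at the locally served model.
llm = Ollama(model="llama2", request_timeout=120.0)
print(llm.complete("What is a vector database?").text)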