How to Run DeepSeek AI Locally on Your PC Using Ollama
Yasantha Mihiran
Software Engineer | H-Town | SE Graduate at SLIIT | GoLang | AI | ML
If you're looking to run DeepSeek, a family of powerful open-source reasoning models, locally on your PC, here's a quick guide using Ollama.
Step 1: Install Ollama
Ollama is a tool that allows you to run large language models locally on your machine. Begin by installing Ollama: download the installer for Windows or macOS from https://ollama.com/download, or use the one-line install script available on the same site for Linux.
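Assuming a typical setup, the installation looks like this (the script URL below is Ollama's official installer):

```shell
# Linux: install via the official script (assumes curl is available)
curl -fsSL https://ollama.com/install.sh | sh

# macOS and Windows: download the installer from https://ollama.com/download

# Verify the installation:
ollama --version
```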
Step 2: Select a DeepSeek model
DeepSeek offers a range of model variants tailored to different hardware capabilities. For a PC with 16 GB of RAM, it's advisable to use smaller models to ensure optimal performance. One such model is DeepSeek-R1-Distill-Qwen-1.5B, which has approximately 1.5 billion parameters and requires around 1.1 GB of RAM. This model provides a good balance between performance and resource usage.
For more information on DeepSeek's models and their specifications, you can visit their official page.
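If you prefer, you can download the model ahead of time and check what is already installed; the model tag below matches the 1.5B distilled variant in the Ollama model library:

```shell
# Download the 1.5B distilled model without starting a chat session:
ollama pull deepseek-r1:1.5b

# Show all models installed locally, with their sizes:
ollama list
```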
Step 3: Run DeepSeek
Open the Command Prompt (or your terminal) and enter the following to run the model:
ollama run deepseek-r1:1.5b
The first time you run this, it will take some time to download the model. You can change the model based on your needs.
Once the model is running, you can start entering prompts directly in the Command Prompt and receive responses in real time.
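You can also pass a prompt directly on the command line for a one-off answer instead of an interactive session (the question below is just an example):

```shell
# Ask a single question, print the answer, and exit:
ollama run deepseek-r1:1.5b "Explain what a large language model is in two sentences."
```

Inside the interactive session, type /bye to exit.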
DeepSeek can be customized for specific tasks like data analysis, information retrieval, or research. Explore the Ollama documentation for more advanced use cases and integrations.
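For integrations, Ollama also serves a REST API on localhost, port 11434 by default. A minimal sketch using curl (the prompt text is illustrative):

```shell
# Send a prompt to the local Ollama server and get back a single JSON response:
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "List two benefits of running an LLM locally.",
  "stream": false
}'
```

Setting "stream" to false returns one complete JSON object instead of a token-by-token stream, which is easier to parse in simple scripts.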
By running DeepSeek locally, you get the benefits of speed, privacy, and control over your data, all without needing cloud access. Enjoy.