Run DeepSeek on Nvidia Jetson Orin Nano
Naveen Kumar Gutti
Hello everyone! In this article we will walk through running the DeepSeek models with Ollama on the Nvidia Jetson Orin Nano, trying out variants with different parameter counts.
Introduction
In brief, DeepSeek is a rival to OpenAI that releases its models as open source. It is developed by Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd., a Chinese artificial intelligence company that builds open-source Large Language Models (LLMs).
The most noticeable difference between DeepSeek and OpenAI is the training cost: DeepSeek's training reportedly cost around $6 million, compared with an estimated $100 million for OpenAI's GPT-4. DeepSeek is said to have trained its models with roughly a tenth of the computing power used for OpenAI's GPT models.
Run the DeepSeek-R1 Model on the Nvidia Jetson Orin Nano
The hardware platform we are going to use for this article is the Nvidia Jetson Orin Nano.
You can read more about the Nvidia Jetson Orin Nano in one of my earlier articles via the link below.
Following are the steps to run the DeepSeek-R1 model. I have set up my Nvidia Jetson Orin Nano to be accessed over the network with SSH enabled.
Install Ollama on the Jetson Orin Nano
Visit the following website: https://ollama.com/download. Select Linux as the platform and follow the on-screen instructions to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
The installer downloads the JetPack 6 components, since the installation is happening on an Nvidia Jetson Orin Nano. If you are installing on a Raspberry Pi 5 or any other SBC without an Nvidia GPU, this component download does not apply.
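Once the script finishes, a quick sanity check (a sketch; adjust for your setup) confirms the binary is on the PATH and the local server answers. Ollama's server listens on port 11434 by default:

```shell
#!/bin/sh
# Sanity check after installation: is the ollama binary on PATH?
if command -v ollama >/dev/null 2>&1; then
  echo "ollama installed: $(ollama --version)"
  # The ollama server listens on port 11434 by default
  curl -s http://localhost:11434/api/version && echo
else
  echo "ollama not found on PATH"
fi
```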
After the installation succeeds, you will see the message shown in the screenshot below.
Running Ollama on the Nvidia Jetson Orin Nano
Use the "ollama" command to explore the options it offers. Run the following to see its help section:
> ollama --help
We are running DeepSeek-R1 for this article. It is available in several sizes: 1.5b, 7b, 8b, 14b, 32b, 70b, and 671b, where "b" stands for billions of parameters.
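As a rough rule of thumb (my own assumption, not official sizing), a 4-bit quantized model needs about half a byte per parameter, plus runtime overhead for the KV cache and activations. A quick estimate shows which variants can plausibly fit in the Orin Nano's 8 GB of unified memory:

```shell
#!/bin/sh
# Rough memory estimate for 4-bit quantized weights: ~0.5 bytes/parameter.
# Runtime overhead (KV cache, activations) comes on top of this.
for params in 1.5 7 8 14 32 70 671; do
  est=$(awk "BEGIN { printf \"%.2f\", $params * 0.5 }")
  echo "deepseek-r1:${params}b -> ~${est} GB of weights (4-bit, approx.)"
done
```

By this estimate, the 1.5b, 7b, and 8b variants are realistic on an 8 GB board, 14b is borderline, and anything larger will not fit.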
Running DeepSeek-R1 1.5b with Ollama
Let's begin by running the DeepSeek-R1 1.5b model. Use the following command:
> ollama run deepseek-r1:1.5b
After the download completes, the following prompt will be shown on the screen.
Following is the query I posted to the deepseek-r1:1.5b model:
>>> Hi, I want to know more about PRPL OS and its comparison with OpenWRT and what are various advantages and disadvantages. I also want to know in which usecase which is better
Following is the response I got from the deepseek-r1:1.5b model:
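Since the board is reachable over the network, you can also query the model without an interactive SSH session, through Ollama's REST API. A minimal sketch, assuming the Jetson's address is 192.168.1.50 (substitute your own); note that by default the server binds to localhost only, so the service may need OLLAMA_HOST=0.0.0.0 to accept remote requests:

```shell
#!/bin/sh
# Build the request body for Ollama's /api/generate endpoint.
# "stream": false returns the whole answer in a single JSON response.
JETSON=192.168.1.50   # assumption: replace with your board's IP
PAYLOAD='{"model": "deepseek-r1:1.5b", "prompt": "What is OpenWrt?", "stream": false}'
echo "$PAYLOAD"
# Send the request (uncomment once the Jetson is reachable):
# curl -s "http://${JETSON}:11434/api/generate" -d "$PAYLOAD"
```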
Running DeepSeek-R1 7b with Ollama
We will now run the 7-billion-parameter DeepSeek-R1 model the same way.
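For scripted, non-interactive use on a headless board, you can also pass the prompt as a command-line argument so the model answers once and exits. A sketch (the guard simply keeps the script from failing on machines where ollama is absent; the first run downloads the weights, roughly 4-5 GB for this variant):

```shell
#!/bin/sh
PROMPT="Compare OpenWrt and prplOS in one paragraph"
echo "Prompt: $PROMPT"
# One-shot, non-interactive run (requires ollama; first use downloads the model):
if command -v ollama >/dev/null 2>&1; then
  ollama run deepseek-r1:7b "$PROMPT"
else
  echo "ollama not installed; skipping"
fi
```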
Conclusion
With the DeepSeek-R1 models, there are several use cases where edge-based inference helps. This opens up new possibilities for running LLMs on edge devices.
I will record the entire session in a YouTube video and share it soon, along with a few use cases.
I will be back soon with another exciting article. Till then, stay happy and happy programming!
References