Tutorial: Edge AI with Triton Inference Server, Kubernetes, Jetson Mate
In this tutorial, we will configure and deploy Nvidia Triton Inference Server on the Jetson Mate carrier board to perform inference of computer vision models. It builds on my previous post, where I introduced the Jetson Mate from Seeed Studio as a way to run a Kubernetes cluster at the edge.
Though this tutorial focuses on the Jetson Mate, you can use one or more Jetson Nano Developer Kits connected to a network switch to run the Kubernetes cluster.
Step 1: Install K3s on Jetson Nano System-on-Modules (SoMs)
Assuming you have installed and configured JetPack 4.6.x on all four Jetson Nano 4GB modules, let’s start with the installation of K3s.
The first step is to make the Nvidia Container Toolkit the default runtime for Docker. To do this, add the line "default-runtime": "nvidia" to the file /etc/docker/daemon.json on each node. This is important because we want K3s to access the GPU available on each Jetson Nano.
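A minimal sketch of this change is shown below. The "nvidia" runtime entry assumes the NVIDIA Container Toolkit shipped with JetPack is already present on the module; only the "default-runtime" line is new.

```sh
# On each Jetson Nano node, set the NVIDIA runtime as Docker's default
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF

# Restart Docker for the change to take effect
sudo systemctl restart docker

# Verify that the default runtime is now nvidia
docker info | grep -i "default runtime"
```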
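With the Docker runtime configured, K3s can be installed on the modules. The sketch below is one typical way to do this, assuming the first Jetson Nano acts as the K3s server and the remaining modules join as agents; <server-ip> and <token> are placeholders you would replace with your own values. The --docker flag tells K3s to use the Docker runtime configured above instead of its bundled containerd.

```sh
# On the module chosen as the K3s server (control plane):
curl -sfL https://get.k3s.io | sh -s - --docker

# Print the join token needed by the worker modules
sudo cat /var/lib/rancher/k3s/server/node-token

# On each remaining Jetson Nano module, join the cluster as an agent.
# Replace <server-ip> and <token> with the values from the server node.
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 \
    K3S_TOKEN=<token> sh -s - --docker

# Back on the server, confirm all four modules have registered
sudo k3s kubectl get nodes
```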
Read the entire article at The New Stack.
Janakiram MSV is an analyst, advisor, and architect. Follow him on Twitter, Facebook, and LinkedIn.