iPad Pro + Raspberry Pi for Data Science Part 4: Installing Kubernetes for Learning Purposes

Hello there friends! We’re back again with a fourth part in our series for enabling a Raspberry Pi to work directly with an iPad Pro. If you’ve been following along so far, you’ll recall that I wasn’t sure I’d write a fourth post. I’ve spent several weeks working on this specific post since, honestly, it’s been a struggle to get it working. Before we jump into why I’m writing this fourth post, I’d like to encourage new readers to get caught up by checking out the three previous posts:

As a machine learning engineer, my primary responsibility in my day job is to work with our data scientists to productionize the machine learning solutions they’ve built so that they interact appropriately with our other enterprise-level systems. Of course, there’s more than one deployment environment in which to do this work, but one of the most popular to emerge in recent years is Kubernetes. In my day job, I work either out of an on-premises Kubernetes cluster or with AWS services like SageMaker.

That said, I like to test out my new experiments in a sandbox space so as not to affect anything running in production. When it comes to my personal setup, I have been running a single-node Kubernetes cluster on my personal MacBook Pro with the help of a tool called Minikube. Minikube has been awesome, but it unfortunately doesn’t work on the Raspberry Pi: the Pi uses an ARM-based CPU architecture, and there isn’t a flavor of Minikube that currently supports it.

But no worries! I have found a different solution called K3s that we’re going to be enabling in this post. In case you’re unaware, Kubernetes is often shortened to “k8s”, and this is because there are 8 characters between the “k” and “s” of Kubernetes. I can’t say this for sure, but I have to guess K3s is named the way it is because it is a much more lightweight version of Kubernetes. I don’t know if I’d recommend it for a full enterprise-grade production system, but as a sandbox space to test out new ideas — like on our Raspberry Pi — K3s definitely gets the job done.

Before hopping into installing K3s, I should note that this post will not cover any how-tos on using Kubernetes itself. You’ll definitely still be able to follow along even if you’ve never worked with Kubernetes, but we’re not going to get into any sort of machine learning deployment patterns in this post. Stay tuned, though, because I have a few other posts in the works that will demo things out on Kubernetes, and I’ll personally be using my iPad/Raspberry Pi combo to execute that work.

Alright, let’s get into our installation!

K3s Initial Installation

Okay, before we get into the actual installation, we need to do one quick thing: enable a few cgroup settings in the /boot/cmdline.txt file. Add the following to the end of that file, on the same line as everything else:

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
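
If you’d rather make that change from the command line, here’s a minimal sketch; the sed one-liner just appends the flags to the end of the existing line (cmdline.txt keeps everything on a single line), and the cat is only there to double-check the result:

# Append the cgroup flags to the end of the single line in /boot/cmdline.txt
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt

# Confirm everything is still on one line before rebooting
cat /boot/cmdline.txt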

Reboot your Pi, and now we can continue along!

The actual installation of K3s on your Raspberry Pi is pretty easy, or at least it is once you know what you need to do. All you have to do is run the following command:

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644

This command does a couple of things. As you can probably tell from the curl in there, we first download an installation script from the K3s website and then run it to install all the components. The piece that wasn’t apparent the first time I did this was the --write-kubeconfig-mode flag. K3s writes its cluster configuration to a k3s.yaml file, and if you do NOT set that flag to 644, that file will only be readable by root, meaning kubectl can’t read the configuration under your normal user and you basically won’t be able to work with the cluster without reaching for sudo every time.
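
For reference, the K3s installer writes that kubeconfig to /etc/rancher/k3s/k3s.yaml, so once the install finishes you can double-check the permissions and confirm kubectl works without sudo:

# With the 644 mode, this file is readable without sudo
ls -l /etc/rancher/k3s/k3s.yaml

# If your Pi shows up as a Ready node without needing sudo, you're all set
kubectl get nodes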

It’ll take a bit for the installation to complete, but once it does, you should be able to run the following command and see some sort of success message: sudo systemctl status k3s. Aside from spinning up the Kubernetes cluster itself, the K3s installation script does a few other things. First, it creates an extra shell script to easily uninstall K3s. If you run into any issues and ever need to start over from the beginning, running the following command will totally wipe K3s from your Pi:

bash /usr/local/bin/k3s-uninstall.sh

The second thing the K3s installation does is install the kubectl command line tool for you, which is pretty handy. In case you’re not familiar with it, kubectl is the primary tool we use to interact with a Kubernetes environment. Whether you’re deploying things, checking the status of resources, or doing just about anything else in a Kubernetes cluster, kubectl is the command you’ll be reaching for.
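
If you want to poke around your freshly installed cluster, here are a few common kubectl commands to try; nothing here is specific to this setup, and the system pods you’ll see (things like coredns and traefik) ship with K3s by default:

# Show the cluster API endpoint
kubectl cluster-info

# List every pod in every namespace, including the K3s system pods
kubectl get pods --all-namespaces

# List the services currently running across all namespaces
kubectl get services --all-namespaces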

Alright, so even though we have the baseline Kubernetes cluster up and running, there are still a few more things to do to reach our ideal configuration. Let’s move on to setting up a means of using local storage for persistent volumes.

Setting Up Local Storage for Persistent Volumes

As you’ll learn if you haven’t worked with Kubernetes before, Kubernetes compute resources (e.g. pods) use ephemeral storage by default. The great thing about Kubernetes is that if a resource dies for any reason (like disconnecting power from the cluster), it’ll rebuild that resource when it can. The bad news is that, because that pod storage is ephemeral, any data the old pod had written is lost when the new one comes up.

Of course, Kubernetes has a mechanism to cover us here. Kubernetes lets us register a wide range of storage devices as what it calls a “Persistent Volume”, or PV for short. With a PV in place, Kubernetes deployments can request a slice of that storage in the form of a Persistent Volume Claim, or PVC for short.

For our experimental purposes, we can use the storage on our Raspberry Pi’s microSD card with a little extra configuration. Of course, this is going to depend on the size of your microSD card. I’m personally using a roomy 200GB microSD card, so I have enough headroom to make use of this pattern. If you’re using a smaller-capacity card, you might not want to enable local storage for your PVs.

With that noted, Rancher (the makers of K3s) have created a little tool called the local-path provisioner that makes it easy to use your local storage. All you need to do to install it is run this command:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

And that’s it! You can now make use of your local storage with K3s. If you’d like to test this out even further, Rancher has provided some great instructions and demo material in the local-path-provisioner GitHub repository.
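
If you’d like a quick way to confirm everything is wired up, here’s a minimal sketch; the PVC name and size below are just placeholders, and note that the claim will sit in a Pending state until a pod actually mounts it, since the local-path storage class waits for its first consumer:

# The manifest above creates its own namespace and registers a "local-path" storage class
kubectl get pods -n local-path-storage
kubectl get storageclass

# A throwaway PVC that requests storage from the local-path storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF

# The claim will show as Pending until a pod mounts it
kubectl get pvc demo-pvc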

With that, we have one final section to cover before we wrap up this post, and it’s one you’ll definitely be interested in. Let’s keep the K3s train rolling!

Making Use of Custom Docker Images

So by default, K3s cannot make use of your own locally built Docker images. (As best I can tell, this is because K3s uses containerd as its container runtime rather than the Docker daemon, so images sitting in your local Docker image store aren’t automatically visible to the cluster.) If you’re pulling an image published to Docker Hub, you’re ready to go out of the box, but if you’re doing local testing with your own images, there’s a little extra step involved. (Truth be told, I feel like there’s probably a better way to do this, but I haven’t figured it out yet. If I do, I’ll come back and update this post.)

In order to use your own custom images, you’ll have to compress your Docker build into a tarball and then use the K3s command line to make it available to your K3s deployments. Here are the steps to do that:

  • Build your container using the standard docker build command.
  • Save the built Docker image as a TAR file. Docker inherently has the ability to do this, and you can initiate that by running this command: docker save --output YOUR_IMAGE.tar NAME_OF_YOUR_DOCKER_IMAGE:latest. (If you use a different tag, swap that out for latest.)
  • Copy the TAR’d image over to K3s using the K3s CLI. Here’s an example of how to do that: sudo k3s ctr images import YOUR_IMAGE.tar. Once that command completes, the quick check below will confirm the image made it in.
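
As a quick sanity check (swapping in your actual image name for the placeholder), you can list what’s in the K3s image store:

# List the images known to K3s and filter for yours
sudo k3s ctr images ls | grep NAME_OF_YOUR_DOCKER_IMAGE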

Frankly, this can be a pain in the neck to do every time you update your image, so I’d encourage you to build a little shell script to automate it. That’s exactly what I ended up doing for one of my other personal projects.
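
In case it helps, here’s a minimal sketch of what such a script might look like; the image name, tag, and build context are all placeholders, so adjust them for your own project:

#!/usr/bin/env bash
# Build a local Docker image and load it into the K3s image store
set -euo pipefail

IMAGE_NAME="my-app"    # placeholder: your image name
IMAGE_TAG="latest"     # placeholder: your image tag
TAR_FILE="${IMAGE_NAME}.tar"

# 1. Build the image with Docker
docker build -t "${IMAGE_NAME}:${IMAGE_TAG}" .

# 2. Save the built image as a TAR file
docker save --output "${TAR_FILE}" "${IMAGE_NAME}:${IMAGE_TAG}"

# 3. Import the TAR file into K3s
sudo k3s ctr images import "${TAR_FILE}"

# 4. Clean up the TAR file
rm -f "${TAR_FILE}"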

And folks, that brings us to the end of this post! If I find better ways of doing anything above, I’ll be sure to update this post accordingly. Otherwise, you should be good to go with K3s on your Raspberry Pi! Just as with the other data science tools on my Pi, I’ve gotten a lot of value out of enabling this pattern. In fact, this entire post, from the title graphic to the body content to the code snippets, was written on my iPad. So you know I’m serious when I say that this pattern works and is awesome.
