Proxmox, Ubuntu 22.04 & Kubernetes How-To

A Kubernetes cluster consists of a control plane and one or more worker nodes. These instructions are specifically for Ubuntu 22.04 on the control plane and worker node servers. If you're using any other operating system, these instructions are unlikely to work.

First, install Ubuntu 22.04 Server on the control plane and on as many worker nodes as you plan to use. During installation, make sure to specify a manual IP address, not DHCP. In addition, choose the installation option to install the OpenSSH server. That way, on a separate machine, you can open an ssh session for each server and finish setup in those terminals.

Start by allowing ssh through the firewall and enabling it, so you don't lock yourself out of the sessions you're about to open. Do the following on all servers.

sudo ufw allow "OpenSSH"
sudo ufw enable        

Once everything is installed, open an ssh terminal for each server and log in on each. From here on out, you'll perform the actions in these ssh sessions for all servers.

Update the system. I like to install net-tools (so you can run ifconfig) and micro (a simple text editor). You can use nano or vi, but micro is so much easier to use than either of those that I strongly recommend you try it. Do the following on all servers.

sudo apt update
sudo apt dist-upgrade
sudo apt install net-tools micro        

Kubernetes requires that you turn off swap. Do this on all servers.

sudo swapoff -a        

Now edit /etc/fstab and remove or comment out the line that specifies the swap file. Like I said above, I like micro for editing. Do this on all servers.

sudo micro /etc/fstab        

Here's an example of a commented-out swap line in /etc/fstab when you install Ubuntu 22.04 on Proxmox. Do this on all servers.

#/swap.img      none    swap    sw      0       0        

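If you'd rather not open an editor, a sed one-liner like this should do it (assuming the swap line starts with /swap.img, as in the example above):

sudo sed -i 's|^/swap.img|#/swap.img|' /etc/fstab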

You can verify that the swap is off with this command (and sample output). Do this on all servers.

free -m

               total        used        free      shared  buff/cache   available
Mem:            1963         619          66           2        1278        1184
Swap:              0           0           0        

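Alternatively, swapon --show prints nothing at all when no swap is active:

swapon --show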

On the server you plan to use as the control plane, set the host name.

sudo hostnamectl set-hostname cplane1        

On the servers you plan to use as workers, set those host names. For the sake of brevity, I'm only including instructions for one worker node.

sudo hostnamectl set-hostname worker1        

You want to edit the /etc/hosts file on every server to point to your control plane and workers. Although I'm using only one worker, I'll give you a sample of what it should look like if you have three workers. Edit the hosts file on every server:

sudo micro /etc/hosts        

Add lines like these, depending on the number of nodes and your IP addresses. The IP addresses here are just my samples; use your own. Do this on all servers.

192.168.10.40 cplane1
192.168.10.41 worker1
192.168.10.42 worker2
192.168.10.43 worker3        

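As a quick sanity check (my suggestion; not strictly required), make sure each name resolves from every server:

ping -c 1 cplane1
ping -c 1 worker1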

Now do the following kernel module operations on all servers. This loads the overlay and br_netfilter modules and makes the change permanent by listing them in /etc/modules-load.d/k8s.conf.

sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF        

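You can confirm both modules are loaded with lsmod:

lsmod | grep -E 'overlay|br_netfilter'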

Now set the kernel parameters Kubernetes requires: bridged traffic must be visible to iptables, and IP forwarding must be on. Do this on all servers.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF        

Apply these settings without rebooting by using this command on all servers.

sudo sysctl --system        

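To verify the settings took effect, query them directly; all three should come back as 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward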

Now install the container runtime (containerd, from Docker's apt repository) on all servers, and back up the default containerd configuration file, because we're going to edit it in a moment. Note the sudo tee in the last command: a plain > redirect would run as your user rather than root and fail with a permission error.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install containerd.io
sudo systemctl stop containerd
sudo mv /etc/containerd/config.toml /etc/containerd/config.toml.orig
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null

Now edit /etc/containerd/config.toml and change one setting from false to true.

First, find SystemdCgroup, which is normally set to false, and set it to true. This tells containerd to use the systemd cgroup driver, which matches what kubeadm configures for the kubelet.

SystemdCgroup = true        

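If you prefer to make the change non-interactively, this one-liner should work against the default config generated above:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml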

Now start containerd.

sudo systemctl start containerd        

Check if everything is kosher.

sudo systemctl is-enabled containerd
sudo systemctl status containerd        

You should see that containerd is enabled, active, and running. For example:

● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2023-09-28 13:56:49 UTC; 1h 5min ago        

Install Kubernetes

Do this on all servers. The apt-mark hold command tells apt not to upgrade the Kubernetes packages from now on.

sudo apt install apt-transport-https ca-certificates curl -y

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://dl.k8s.io/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update
sudo apt install kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl        

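One caveat: the apt.kubernetes.io repository used above has since been deprecated in favor of the community-owned repositories at pkgs.k8s.io. If the commands above fail, the current equivalent looks roughly like this (this assumes Kubernetes v1.28; check the official install docs for the exact paths):

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
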
Now add flanneld, the Flannel network daemon that will handle pod networking, to your installation. Do this on all servers.

sudo mkdir -p /opt/bin/

sudo curl -fsSLo /opt/bin/flanneld https://github.com/flannel-io/flannel/releases/download/v0.19.0/flanneld-amd64

sudo chmod +x /opt/bin/flanneld        

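As a quick check that the binary downloaded intact and is executable, this should print the flannel version (my suggestion, not strictly required):

/opt/bin/flanneld --version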

The Control Plane

Now go to the ssh terminal for the control plane. Open up these ports and check the status:

sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10259/tcp
sudo ufw allow 10257/tcp

sudo ufw status        

You should see something like this:

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
6443/tcp                   ALLOW       Anywhere
2379:2380/tcp              ALLOW       Anywhere
10250/tcp                  ALLOW       Anywhere
10259/tcp                  ALLOW       Anywhere
10257/tcp                  ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
6443/tcp (v6)              ALLOW       Anywhere (v6)
2379:2380/tcp (v6)         ALLOW       Anywhere (v6)
10250/tcp (v6)             ALLOW       Anywhere (v6)
10259/tcp (v6)             ALLOW       Anywhere (v6)
10257/tcp (v6)             ALLOW       Anywhere (v6)        

Now pull the Kubernetes images on the control plane server only.

sudo kubeadm config images pull        

Now initialize Kubernetes. Substitute YOUR control plane IP address for 192.168.10.40 in this command:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.10.40 \
--cri-socket=unix:///run/containerd/containerd.sock        

Now set up the credentials you need to run the control plane.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config        

Check to see if it's running.

kubectl cluster-info        

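If the control plane is healthy, you should see something like this (with your own IP address):

Kubernetes control plane is running at https://192.168.10.40:6443
CoreDNS is running at https://192.168.10.40:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy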

Now apply the settings to get Kubernetes to use Flannel.

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml        

You can check everything with this command:

kubectl get pods --all-namespaces        

Get the Worker Join Command

On your control plane, issue this command:

sudo kubeadm token create --print-join-command        

Copy the output of this command and save it somewhere. You will need it for the worker nodes. For example, the output for my cluster looks like this:

kubeadm join 192.168.10.40:6443 --token d79u9c.22evtkkn0arwpltx --discovery-token-ca-cert-hash sha256:591027ac99b4e04319e67e79e5251d5d106b38c036f4b5f744ecdf5c3c9c0549        

The Worker Nodes

Switch to the ssh terminal for your worker nodes, one at a time. For the sake of brevity, these instructions are only for one worker node. If you have multiple worker nodes, perform these actions on all of them.

Open up the firewall for your worker node and check the status:

sudo ufw allow 10250/tcp
sudo ufw allow 30000:32767/tcp

sudo ufw status        

Remember that join command you saved? Run it now, with sudo (kubeadm join needs root). For example (don't use this exact command, use the one you saved):

sudo kubeadm join 192.168.10.40:6443 --token d79u9c.22evtkkn0arwpltx --discovery-token-ca-cert-hash sha256:591027ac99b4e04319e67e79e5251d5d106b38c036f4b5f744ecdf5c3c9c0549

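While you're still on the worker, you can confirm the kubelet came up after the join:

sudo systemctl status kubelet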

Back To The Control Plane

Go back to the ssh terminal for the control plane and check to see if the worker was added. First, check all the pods:

kubectl get pods --all-namespaces        

You should see something like this:

NAMESPACE      NAME                              READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-nwvgl             1/1     Running   5 (89m ago)   15d
kube-flannel   kube-flannel-ds-z64gx             1/1     Running   2 (89m ago)   13d
kube-system    coredns-5dd5756b68-9zvqw          1/1     Running   3 (90m ago)   15d
kube-system    coredns-5dd5756b68-t7s5m          1/1     Running   3 (90m ago)   15d
kube-system    etcd-cplane1                      1/1     Running   3 (90m ago)   15d
kube-system    kube-apiserver-cplane1            1/1     Running   3 (90m ago)   15d
kube-system    kube-controller-manager-cplane1   1/1     Running   3 (90m ago)   15d
kube-system    kube-proxy-2gj8t                  1/1     Running   3 (90m ago)   15d
kube-system    kube-proxy-65z9m                  1/1     Running   1 (89m ago)   13d
kube-system    kube-scheduler-cplane1            1/1     Running   3 (90m ago)   15d        

If everything looks kosher, now check to see if the worker is added:

kubectl get nodes -o wide

or

kubectl get nodes        

If you're not using "-o wide", you should see something like this:

NAME      STATUS   ROLES           AGE   VERSION
cplane1   Ready    control-plane   15d   v1.28.1
worker1   Ready    <none>          13d   v1.28.1        

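Optionally (a cosmetic touch of my own, not required), you can label each worker so the ROLES column shows something friendlier than <none>:

kubectl label node worker1 node-role.kubernetes.io/worker=worker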

Congratulations! You now have a Kubernetes cluster with one worker node online. Rinse and repeat these last two sections for any additional worker nodes.
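
As a final smoke test (my suggestion, not part of the original walkthrough), deploy nginx and expose it on a NodePort; this is exactly the 30000:32767 range we opened on the workers:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get svc nginx

The PORT(S) column shows the assigned NodePort (80:31234/TCP, for example); browsing to http://worker1:31234 should return the nginx welcome page.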

