Kubernetes Lab Setup
Robert Lackey
Staff Product Security Engineer @Cribl | Cloud and Application Security Specialist | @US Army Veteran | Prev. @Broadcom @VMware @SAP | Offensive Security Enthusiast
Now that I have finished the AWAE course, I decided to start studying for the Certified Kubernetes Administrator (CKA) exam. I want to get the Certified Kubernetes Security Specialist (CKS) certification, and the CKA certification is a prerequisite for it.
I recently bought a copy of the Certified Kubernetes Administrator (CKA) Study Guide and started reading through it, but I needed a lab to practice in. I found a couple of tutorials (links at the bottom) for setting up Kubernetes, configuring routing, setting up the Istio service mesh with mTLS, and deploying a sample application. The service mesh with mTLS isn't part of the CKA exam, but I think it will be on the CKS exam.
Following the tutorials didn't work as expected for me and seemed to be missing some steps, so I decided to document my process. I'll cover the Kubernetes setup in this post and the Istio setup in another post if there's interest.
I set up two VMs on my internal network: a control node and a worker node.
Calico is used for the Container Network Interface (CNI) because it has an integration with Istio that allows you to define policies that enforce against HTTP methods or paths.
containerd is used for the Container Runtime Interface (CRI) because it's easy to install and doesn't have the overhead of Docker.
Install containerd on the control and worker nodes.
sudo apt-get install containerd
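Depending on the Ubuntu release, the packaged containerd may ship without a CRI-friendly config, and its cgroup driver may not match the kubelet's systemd default. Here is a minimal sketch, assuming containerd 1.6+ on a systemd host, of generating a default config and switching runc to the systemd cgroup driver:
# Write out containerd's default configuration (this overwrites any existing config.toml)
containerd config default | sudo tee /etc/containerd/config.toml
# Flip the runc cgroup driver to systemd so it matches kubeadm's default kubelet setting
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Restart containerd so the new config takes effect
sudo systemctl restart containerd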
Install kubeadm, kubelet, and kubectl on the control and worker nodes. kubectl isn't really needed on the worker node since all of the kubectl commands will be run from the control node, but it won't hurt anything.
These are the commands from the Kubernetes docs for setting up Kubernetes on Ubuntu.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
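The Kubernetes docs also suggest holding these packages so a routine apt upgrade doesn't move the cluster to a new version unexpectedly:
sudo apt-mark hold kubelet kubeadm kubectl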
Now that kubeadm is installed on both nodes, disable swap, set the ip_forward flag to 1, and load the br_netfilter kernel module with modprobe. Setting the ip_forward flag and loading br_netfilter are already done if Docker is used as the container runtime. The ip_forward flag enables packet forwarding, and br_netfilter is required to enable transparent masquerading and to facilitate Virtual Extensible LAN (VXLAN) traffic for communication between the Kubernetes pods.
sudo swapoff -a
sudo sysctl -w net.ipv4.ip_forward=1
sudo modprobe br_netfilter
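Note that these three commands don't survive a reboot. A minimal sketch for making them persistent (the k8s.conf file names below are my own choice) looks like this:
# Comment out any swap entries so swap stays disabled after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Load br_netfilter automatically at boot
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
# Persist the packet forwarding setting
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/k8s.conf
# Apply all sysctl settings now
sudo sysctl --system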
On both nodes, start and enable containerd, and set crictl to point to the containerd runtime socket.
sudo systemctl start containerd
sudo systemctl enable containerd
sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
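To confirm that crictl can actually reach containerd over that socket, this should print the runtime status without connection errors:
sudo crictl info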
Initialize the cluster on the control node and set the pod network to 10.100.0.0/16 so it doesn't conflict with the host network.
sudo kubeadm init --pod-network-cidr=10.100.0.0/16
After the initialization finishes, create the .kube directory, copy the kube config file to the .kube directory, and change its ownership to the current user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Take the join command with the discovery token that was generated during the initialization and run it on the worker node.
sudo kubeadm join 192.168.0.18:6443 --token jxzoqt.fvob6w1bhrj4hkk7 \
    --discovery-token-ca-cert-hash sha256:60adad807d0d50249e93b81864397f277610661d4dfddc567ecd07d779b63b48
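If the worker is joined later rather than right away, keep in mind that the bootstrap token expires (after 24 hours by default). A fresh join command can be printed from the control node with:
sudo kubeadm token create --print-join-command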
Check the status of the nodes from the control node.
kubectl get nodes
The output should look similar to this.
notarealuser@control:~$ kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
userv     Ready    control-plane   39m   v1.25.4
worker1   Ready    <none>          35m   v1.25.4
notarealuser@control:~$
Install the Tigera Calico operator and custom resource definitions. From the Calico docs, "the operator provides lifecycle management for Calico exposed via the Kubernetes API defined as a custom resource definition."
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml
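Before moving on, you can check that the operator came up; it runs in the tigera-operator namespace:
kubectl get pods -n tigera-operator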
Since the pod network is customized, the Calico custom-resources.yaml file needs to be edited and the CIDR needs to be set to the pod network that was set during initialization.
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml
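Assuming the downloaded file still contains Calico's default pool of 192.168.0.0/16, the CIDR can be swapped with a quick sed (or just edit it by hand):
sed -i 's|192.168.0.0/16|10.100.0.0/16|' custom-resources.yaml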
The file should look like this after it's edited.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.100.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
Create the custom resources for the Calico CNI.
kubectl create -f custom-resources.yaml
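It can take a few minutes for the operator to roll everything out. The operator-based install puts the Calico pods in the calico-system namespace, so progress can be watched with:
watch kubectl get pods -n calico-system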
Now Kubernetes is running with the Calico CNI.
Here are the links to the tutorials: