Comprehensive Guide to Setting Up a Kubernetes Cluster Using kubeadm

Introduction

Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. This guide demonstrates how to set up a Kubernetes cluster with kubeadm on a group of Ubuntu servers, assuming one control plane node and two worker nodes.

Prerequisites

  • Hardware/VM Requirements: At least 1 control plane node and 1 or more worker nodes.
  • Operating System: Ubuntu 20.04 (or similar Debian-based system).
  • Networking: Private IP addresses assigned to each node.
  • User Access: Root or sudo privileges on each node.
  • Internet Connectivity: For downloading packages and images.

Environment Setup

Set Hostnames and /etc/hosts

On each node (control plane and workers):

Set the Hostname — For example, on the control plane:

sudo hostnamectl set-hostname control-node        

On worker nodes, use unique hostnames (e.g., worker-node1, worker-node2):

sudo hostnamectl set-hostname worker-node1
# and on the second worker:
sudo hostnamectl set-hostname worker-node2

Configure /etc/hosts — Edit the /etc/hosts file on each node so they can resolve each other’s hostnames using their private IP addresses. For example:

10.0.0.10   control-node 
10.0.0.11   worker-node1 
10.0.0.12   worker-node2        

(Replace the IP addresses with your actual private IPs.) Log out and log back in to ensure the changes take effect.
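
If resolution is working, each node should now be able to reach the others by hostname (using the example hostnames above):

ping -c 2 control-node
ping -c 2 worker-node1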

Installing and Configuring containerd

containerd is a popular container runtime and the one used in this guide. The following steps should be run on all nodes (control plane and workers).

Load Required Kernel Modules

Create a file to ensure required kernel modules load at boot:

sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

Load the modules immediately:

sudo modprobe overlay 
sudo modprobe br_netfilter        
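
You can confirm both modules are loaded before moving on:

lsmod | grep -E 'overlay|br_netfilter'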

Set Sysctl Parameters — Create a sysctl configuration file for Kubernetes networking:

sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

Apply the changes immediately:

sudo sysctl --system        
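
To confirm the settings took effect, query them directly; each should report 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward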

Install containerd — Update package lists and install containerd:

sudo apt-get update && sudo apt-get install containerd -y        

Create the containerd configuration directory and generate the default configuration:

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
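
kubeadm configures the kubelet to use the systemd cgroup driver by default, while the generated containerd config usually sets SystemdCgroup = false under the runc runtime options. If yours does, flip it to true so the two agree; a one-line edit with sed:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml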

Restart containerd and check its status:

sudo systemctl restart containerd
sudo systemctl status containerd
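
As a quick sanity check that the runtime is responding, you can query it with ctr, the low-level CLI that ships with containerd:

sudo ctr version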

Disabling Swap

Kubernetes requires that swap be disabled on all nodes; by default, the kubelet will not start if swap is enabled. Run the following on every node:

sudo swapoff -a        

To prevent swap from re-enabling on reboot, remove or comment out any swap entries in /etc/fstab.
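
One way to comment out the swap entries in place is shown below; this assumes GNU sed and a conventional fstab layout, so review /etc/fstab afterwards:

sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab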

Installing Kubernetes Components

This section explains how to install kubeadm, kubelet, and kubectl on both the control plane and worker nodes. We will install Kubernetes v1.31.

For Both Control Plane and Worker Nodes

Update the apt Package Index and Install Prerequisites

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

Create the Keyrings Directory — (This step is needed on releases older than Debian 12 and Ubuntu 22.04, where /etc/apt/keyrings does not exist by default.)

sudo mkdir -p -m 755 /etc/apt/keyrings        

Download the Public Signing Key for Kubernetes v1.31

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg        

Add the Kubernetes apt Repository — This command will create (or overwrite) the repository file:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list        

Update Package Lists and Install Kubernetes Components

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
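
Confirm the installed versions before continuing:

kubeadm version
kubelet --version
kubectl version --client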

Pin the Package Versions — To prevent unwanted upgrades, run:

sudo apt-mark hold kubelet kubeadm kubectl        

(Optional) Enable the kubelet Service Immediately

sudo systemctl enable --now kubelet        

Initializing the Control Plane

Perform these steps only on the Control Plane node.

Initialize the Cluster — Choose a Pod network CIDR that matches your network plugin. For example, using Calico’s recommended CIDR:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16        

Set Up kubeconfig for kubectl — Configure your user account to use the cluster credentials:

mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
sudo chown $(id -u):$(id -g) $HOME/.kube/config        
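
At this point kubectl should be able to reach the API server. The control plane node will report NotReady until a network plugin is deployed in the next step:

kubectl get nodes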

Deploy a Network Plugin — Here’s how to deploy Calico (a popular network plugin):

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml        
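
The Calico components are deployed into the kube-system namespace and take a short while to start. You can watch until the calico pods report Running:

kubectl get pods -n kube-system -w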

Generate the Join Command for Worker Nodes — To allow worker nodes to join your cluster, run:

kubeadm token create --print-join-command        

Copy the output join command; it will look similar to:

kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>        

Joining Worker Nodes to the Cluster

On each Worker Node, perform the following:

Set the Hostname and Configure /etc/hosts — as described in the Environment Setup section. Ensure containerd, the Kubernetes components, and swap disabling are already completed (see above).

Join the Cluster — Using the join command obtained from the control plane, run:

sudo kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>        

Replace <token> and <hash> with the actual values from your control plane output.
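
Optionally, from the control plane you can label the workers so that kubectl get nodes shows a role for them. This is purely cosmetic and follows the common node-role label convention:

kubectl label node worker-node1 node-role.kubernetes.io/worker=
kubectl label node worker-node2 node-role.kubernetes.io/worker=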

Verification and Troubleshooting

Verify Cluster Nodes — On the control plane node, run:

kubectl get nodes        

You should see the control plane and worker nodes listed with a status of Ready.
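
The output should look roughly like the following; names, ages, and exact versions will differ, and a freshly joined node can take a minute or two to become Ready:

NAME           STATUS   ROLES           AGE   VERSION
control-node   Ready    control-plane   10m   v1.31.0
worker-node1   Ready    <none>          3m    v1.31.0
worker-node2   Ready    <none>          2m    v1.31.0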

Troubleshooting Tips:

  • If nodes are not showing up, check the kubeadm join output for errors.
  • Run kubectl describe node <node-name> to see if there are taints preventing scheduling.
  • Check logs with journalctl -u kubelet on any problematic node.
  • Verify network connectivity between nodes and that the required ports (such as 6443 for the API server) are open; see the quick check below.
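
A quick way to test that the API server port is reachable from a worker node, assuming the control plane IP from the examples above (nc comes from the netcat-openbsd package on Ubuntu):

nc -vz 10.0.0.10 6443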


