Mastering High Availability Kubernetes: A Comprehensive Approach to Enhancing Resilience through Nginx Integration


In the landscape of container orchestration, Kubernetes stands as a cornerstone for dynamic application management and scaling. Achieving high availability within our Kubernetes infrastructure is imperative for sustaining operational continuity and resilience. This comprehensive guide will navigate us through the intricate process of deploying a Kubernetes cluster comprising 3 master nodes, 3 worker nodes, and 1 High Availability (HA) node, all while integrating Nginx to provide advanced load balancing capabilities. Let’s embark on this journey together!


Prerequisites

Before we start, ensure we have the following:

1. Operating System: Ubuntu 24.04 on all nodes.

2. Nodes for creating the cluster: 1 Bastion (also used as the Nginx HA node), 3 Masters, 3 Workers.

3. SSH Access: SSH connectivity from the bastion host to all nodes.

1. Configure the Bastion Host

a. System Update:

>> sudo apt update && sudo apt upgrade -y

b. Install Required Packages:

>> sudo apt install -y curl apt-transport-https

c. Install Kubernetes CLI (`kubectl`):

>> curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
>> sudo touch /etc/apt/sources.list.d/kubernetes.list
>> echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
>> sudo apt update
>> sudo apt install -y kubectl
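Note that the legacy apt.kubernetes.io / packages.cloud.google.com repository shown above has been deprecated in favor of the community-owned pkgs.k8s.io repositories, and apt-key is deprecated on recent Ubuntu releases. A minimal sketch of the current-style repository setup (the v1.30 minor version below is only an example; pin whichever release you intend to run):

>> sudo mkdir -p /etc/apt/keyrings
>> curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
>> echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
>> sudo apt update
>> sudo apt install -y kubectl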

2. Set Up the Master Nodes

a. Install Docker and Kubernetes Components:

>> sudo apt update
>> sudo apt install -y docker.io
>> sudo apt install -y kubelet kubeadm kubectl
>> sudo apt-mark hold kubelet kubeadm kubectl
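Before running kubeadm on any master or worker, it is worth preparing the host: kubeadm's preflight checks expect swap to be disabled and IP forwarding enabled. A minimal sketch of that preparation (these exact commands are not in the original steps, but are commonly required):

>> sudo swapoff -a
>> sudo sed -i '/ swap / s/^/#/' /etc/fstab
>> echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/k8s.conf
>> sudo sysctl --system

Also note that recent Kubernetes releases no longer include the Docker shim, so the docker.io package is typically paired with cri-dockerd, or the bundled containerd is configured as the container runtime instead.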

b. Initialize the Kubernetes Cluster on the Primary Master Node:

>> sudo kubeadm init --control-plane-endpoint "<nginx-ha-ip>:<port>" --upload-certs --pod-network-cidr=10.244.0.0/16

Replace <nginx-ha-ip>:<port> with your Nginx HA IP and port (usually 6443).

c. Set Up kubectl for Access:

>> mkdir -p $HOME/.kube
>> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
>> sudo chown $(id -u):$(id -g) $HOME/.kube/config

d. Deploy a Network Plugin (Flannel):

>> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

(If this legacy coreos path no longer resolves, the manifest is now published by the flannel-io/flannel project, for example at https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml.)
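To confirm that the Flannel pods have come up (the namespace differs between Flannel versions, so filtering across all namespaces is the safest check):

>> kubectl get pods --all-namespaces | grep -i flannel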

e. Join Additional Master Nodes:

Execute the control-plane join command printed in the kubeadm init output of the first master node on each additional master node.
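That command has the following general shape; the token, CA certificate hash, and certificate key shown here are placeholders, not real values:

>> sudo kubeadm join <nginx-ha-ip>:6443 --token <token> \
       --discovery-token-ca-cert-hash sha256:<hash> \
       --control-plane --certificate-key <certificate-key>

If the token or certificate key has expired, regenerate them on the first master with `kubeadm token create --print-join-command` and `sudo kubeadm init phase upload-certs --upload-certs`.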

3. Configure the Worker Nodes

a. Install Docker and Kubernetes Components:

>> sudo apt update
>> sudo apt install -y docker.io
>> sudo apt install -y kubelet kubeadm kubectl
>> sudo apt-mark hold kubelet kubeadm kubectl

b. Join Worker Nodes to the Cluster:

Run the worker join command from the kubeadm init output of the first master node on each worker node.
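It is the same join command as for the additional masters, minus the --control-plane and --certificate-key flags (token and hash are again placeholders):

>> sudo kubeadm join <nginx-ha-ip>:6443 --token <token> \
       --discovery-token-ca-cert-hash sha256:<hash>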

4. Set Up High Availability (HA) with Nginx

a. Install Nginx on the HA Node:

>> sudo apt update
>> sudo apt install -y nginx

b. Configure Nginx for Load Balancing:

Edit the Nginx configuration file (`/etc/nginx/nginx.conf`). Because the Kubernetes API server speaks TLS and authenticates clients with certificates, Nginx should load-balance the API as a plain TCP passthrough using its stream module rather than terminating HTTP, so that TLS ends at the masters:

stream {
    upstream kubernetes-api {
        server Master1_IP:6443;
        server Master2_IP:6443;
        server Master3_IP:6443;
    }

    server {
        listen 6443;
        proxy_pass kubernetes-api;
    }
}

Place the stream block at the top level of nginx.conf, alongside the existing http block (not inside it), and replace Master1_IP, Master2_IP, and Master3_IP with the actual master node addresses.

Restart Nginx:

>> sudo systemctl restart nginx
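To validate the configuration and confirm that Nginx is listening for API traffic (assuming the stream configuration above and port 6443):

>> sudo nginx -t
>> sudo ss -tlnp | grep 6443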

c. Update Master Nodes to Use the Nginx Endpoint:

Ensure that the kubeconfig files used by clients and the masters (for example, $HOME/.kube/config on the bastion) point to the Nginx HA IP and port rather than to an individual master. Because kubeadm init was run with --control-plane-endpoint, the generated admin.conf should already reference this endpoint; it is still worth verifying before relying on the load balancer.
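A quick way to check which endpoint a kubeconfig targets (the address in the expected output is a placeholder):

>> grep 'server:' $HOME/.kube/config
    server: https://<nginx-ha-ip>:6443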

5. Verify the Cluster

a. Check Cluster Health:

>> kubectl get nodes
>> kubectl get pods --all-namespaces
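In addition, `kubectl cluster-info` should report the control plane as running at the Nginx HA endpoint, confirming that client traffic is going through the load balancer:

>> kubectl cluster-info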

b. Test our Deployment:

Deploy a sample application to ensure that the cluster is operating as expected and that Nginx is correctly load-balancing traffic.

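A minimal smoke test, assuming the deployment name and image below (they are arbitrary examples, not part of the original setup):

>> kubectl create deployment hello-web --image=nginx
>> kubectl expose deployment hello-web --port=80 --type=NodePort
>> kubectl get pods -o wide
>> kubectl get svc hello-web

If the pods are scheduled across the worker nodes and the service answers on its NodePort, the cluster is healthy and the kubectl traffic behind these commands is flowing through the Nginx endpoint.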

Conclusion

By following these steps, we establish a highly available Kubernetes cluster fronted by Nginx for effective load balancing. This setup provides resilience and high availability while optimizing traffic distribution across our master nodes.
