EKS managed Node Groups — what are they and how they work?

If you are spinning up a Kubernetes cluster on Amazon EKS in your enterprise and using Amazon AMIs, then EKS Managed Node Groups are your best bet for keeping applications up and running during a Kubernetes cluster upgrade or during AMI patching and release updates on the EC2 worker nodes. The feature comes at no additional cost and is highly recommended when using Amazon AMIs.

EKS Managed Node Groups automate the provisioning and lifecycle management of EC2 worker nodes in a Kubernetes cluster. There is no application downtime and no user-managed orchestration overhead to keep applications highly available while the worker node (EC2) AMIs are being updated.

EKS Managed Node Groups integrate with Amazon EC2 Auto Scaling to automatically adjust the number of worker nodes based on workload demands.
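You can also adjust the node count yourself from the CLI. A minimal sketch with eksctl (placeholder names; substitute your own cluster and node group):

#> eksctl scale nodegroup --cluster=<cluster-name> --name=<nodegroup-name> --nodes=5 --nodes-min=2 --nodes-max=6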

This is all achieved with a single click or an API call, and it respects Pod Disruption Budgets (PDBs). A PDB defines the minimum number of Pods of an application that must remain available during voluntary disruptions such as upgrades.
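For example, the sketch below creates a PDB that keeps at least 2 Pods of an app available during voluntary disruptions. The name demo-app-pdb and the label app=demo-app are assumptions for illustration; match the selector to your own deployment:

#> kubectl create poddisruptionbudget demo-app-pdb --selector=app=demo-app --min-available=2
#> kubectl get pdb demo-app-pdb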

Let’s run through a Kubernetes cluster creation on EKS with an EKS managed node group, perform a rolling upgrade of the cluster and its worker nodes, and observe how managed node groups keep the Pods up and running throughout the upgrade.

Node groups can be created from the AWS Console, the CLI, or Infrastructure as Code tools like Terraform or AWS CloudFormation.

The syntax to create a managed node group along with an EKS cluster using eksctl is:

#> eksctl create cluster --name <cluster-name> --version <target-version> --nodegroup-name <nodegroup-name> --node-type <instance-type> --nodes <number-of-nodes> --managed

In this demo, Kubernetes version 1.31 (one version behind the latest stable Kubernetes version, 1.32) will be deployed on the EKS cluster and worker nodes. The instance type is t3.micro (free tier) and the number of worker nodes is four.

#> eksctl create cluster --name eks-demo --version 1.31 --nodegroup-name eks-demo --node-type t3.micro --nodes 4 --managed

EKS Cluster creation in progress

Once the K8s cluster is created, deploy a simple app.
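For example, a minimal test workload (the nginx image and the name demo-app are assumptions for illustration; use your own application):

#> kubectl create deployment demo-app --image=nginx --replicas=8
#> kubectl get pods -o wide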

In the AWS Console, under EKS, the cluster shows up on Kubernetes version 1.31 with an EKS managed node group on version 1.31 and 4 worker nodes running.

Since we are one version behind the latest stable K8s version, let’s proceed to upgrade the cluster control plane first.

Click Upgrade now. The target version is the latest, 1.32.
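The same control-plane upgrade can also be started from the CLI. A sketch using eksctl, assuming the demo cluster name used above:

#> eksctl upgrade cluster --name=eks-demo --version=1.32 --approve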

Cluster upgrade in progress…

Once the upgrade is complete, the cluster is running Kubernetes version 1.32.
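To confirm the control-plane version from the CLI (a quick check, assuming the demo cluster name):

#> aws eks describe-cluster --name eks-demo --query cluster.version --output text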

The node group is still at 1.31. Worker nodes can run up to two minor versions behind the cluster's control-plane version.

The console shows a notification that a new AMI release version is available for this node group.

Node groups can be updated either from the AWS Console or CLI.

#> eksctl upgrade nodegroup --name=eks-demo --cluster=eks-demo
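If you want to move the node group to a specific Kubernetes version rather than just the latest AMI for its current version, eksctl also accepts an explicit version flag (shown here as a sketch for the demo cluster):

#> eksctl upgrade nodegroup --name=eks-demo --cluster=eks-demo --kubernetes-version=1.32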

Let’s do it from the Console. Click Update now.

There are two ways to upgrade — Rolling or Force update.

Rolling Update: Respects Pod Disruption Budgets for the cluster. Update will not proceed if EKS is unable to gracefully drain pods running on this node group.

Force Update: Does not respect Pod Disruption Budgets and forces node restarts. This may cause disruption to running applications.

Select the Rolling Update option and click Update.

Since EKS integrates with EC2 Auto Scaling groups to adjust worker node capacity during a rolling upgrade, additional EC2 instances are spun up (up to 6 in this case) to absorb the capacity of the worker nodes that are being replaced.

This can be observed in the EC2 console.
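You can also inspect the Auto Scaling group that backs the node group from the CLI. A sketch, assuming the demo names; the ASG name itself is generated by EKS:

#> aws eks describe-nodegroup --cluster-name eks-demo --nodegroup-name eks-demo --query "nodegroup.resources.autoScalingGroups[0].name" --output text
#> aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names <asg-name-from-above> --query "AutoScalingGroups[0].[DesiredCapacity,MaxSize]"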

#> kubectl get nodes        

New worker nodes are deployed on the target 1.32 version while the existing 1.31 nodes are cordoned, drained, and decommissioned automatically, ensuring no application downtime.

During this process, the number of running pods remains unchanged and the application is not disrupted.
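One way to verify this is to watch the pods and the PDB while the rolling update runs (the PDB name assumes the illustrative demo-app-pdb created earlier):

#> kubectl get pods -w
#> kubectl get pdb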

Once the upgrade completes, all EC2 worker nodes are at the target version, 1.32.
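A quick way to confirm the node group version from the CLI (assuming the demo names):

#> aws eks describe-nodegroup --cluster-name eks-demo --nodegroup-name eks-demo --query "nodegroup.[version,releaseVersion]" --output text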

Use Cases:

  • Running production workloads with minimal manual intervention.
  • Scaling applications dynamically based on demand.
  • Simplifying cluster management for teams with limited Kubernetes expertise.

Limitations:

  • Managed Node Groups are specific to EKS and cannot be used with self-managed Kubernetes clusters.
  • Advanced customization may require the use of launch templates or self-managed nodes.

In summary, EKS Managed Node Groups provide a streamlined way to manage worker nodes in an EKS cluster, making it easier to run and scale Kubernetes applications on AWS!!!

Happy learning!!!

Daniel Sullivan

Assoc Director at NightWing | Hands-On AWS Systems Engineer and Cloud Architect | U.S. Navy Veteran | Cyber Security Certified | Ex-Raytheon | Active Clearance

5 days ago

Thanks for sharing this concept - I will try it

Raghunath Erumal

Cloud Operations | CKA, AWS SAA

6 days ago

Very helpful Shyam Easwar. Nice way to explain the seamless upgrades achieved on EKS control and worker nodes. To complete the topic, for "How to identify and remediate removed API usage in your K8s resources before upgrading the control plane", cluster insights is the way to tackle this: https://docs.aws.amazon.com/eks/latest/best-practices/cluster-upgrades.html#identify-and-remediate-removed-api-usage-before-upgrading-the-control-plane
