Deploying a Kubernetes Cluster with Amazon EKS
Rounak Surana
SRE/DevOps Evangelist | Driving Cloud Infrastructure Excellence | Kubernetes & Terraform Expert
In 2018, AWS, Oracle, Microsoft, VMware, and Pivotal all joined the CNCF, jumping on the Kubernetes bandwagon. This adoption by enterprise giants has been coupled with a meteoric rise in usage and popularity.
Yet despite all of this, the simple truth is that Kubernetes is hard.
Yes, recent versions have made deploying and managing a Kubernetes cluster simpler, but there are still obstacles to wider adoption. Even once you've acquainted yourself with pods, services, and replication controllers, you still need to tackle networking, load balancing, and monitoring. And that's without mentioning security. Hence the need for hosted and managed Kubernetes services. In this article, I'd like to give those interested in trying out AWS EKS a chance to get a Kubernetes cluster up and running.
Warning! AWS charges $0.10 per hour for each EKS cluster, so you may be charged for running these examples. The total should only be a few dollars, but we're not responsible for any charges you may incur.
What is Amazon EKS?
Amazon EKS (Elastic Kubernetes Service, originally Elastic Container Service for Kubernetes) is a managed Kubernetes service that allows you to run Kubernetes on AWS without the hassle of managing the Kubernetes control plane.
The Kubernetes control plane plays a crucial role in a Kubernetes deployment as it is responsible for how Kubernetes communicates with your cluster — starting and stopping new containers, scheduling containers, performing health checks, and many more management tasks.
The big benefit of EKS, and other similar hosted Kubernetes services, is taking away the operational burden involved in running this control plane. You deploy cluster worker nodes using defined AMIs and with the help of CloudFormation, and EKS will provision, scale and manage the Kubernetes control plane for you to ensure high availability, security and scalability.
In this article, we will deploy an EKS Cluster using Terraform.
Why deploy with Terraform?
While you could use the built-in AWS provisioning processes (UI, CLI, CloudFormation) for EKS clusters, Terraform provides you with several benefits:
- Unified Workflow - If you are already deploying infrastructure to AWS with Terraform, your EKS cluster can fit into that workflow. You can also deploy applications into your EKS cluster using Terraform.
- Full Lifecycle Management - Terraform doesn't just create resources; it also updates and deletes tracked resources without requiring you to inspect the API to identify them.
- Graph of Relationships - Terraform understands the dependency relationships between resources. For example, if an AWS Kubernetes cluster needs a specific VPC and subnet configuration, Terraform won't attempt to create the cluster if the VPC and subnets failed to create with the proper configuration.
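To illustrate the last point, here is a minimal, hypothetical sketch (module names and arguments are illustrative, not the repository's actual code): because the eks module references the vpc module's outputs, Terraform infers that the VPC must be created successfully before the cluster.

```hcl
# Hypothetical fragment for illustration only.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  name   = "example-vpc"
  cidr   = "10.0.0.0/16"
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  vpc_id  = module.vpc.vpc_id          # implicit dependency: EKS waits for the VPC
  subnets = module.vpc.private_subnets # same: subnets must exist first
}
```

No explicit depends_on is needed; the references themselves build the dependency graph.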
Prerequisites
I assume some basic familiarity with Kubernetes and kubectl, but I do not assume any pre-existing deployment.
I also assume that you are familiar with the usual Terraform plan/apply workflow. If you're new to Terraform itself, refer first to the Getting Started guide.
For this, you will need:
- an AWS account with the IAM permissions listed on the EKS module documentation,
- a configured AWS CLI
- AWS IAM Authenticator
- kubectl
- wget (required for the eks module)
We will first launch an AWS Ubuntu EC2 instance (called the "Bastion Box" in this article), which will act as a bootstrap node, and configure the AWS CLI, AWS IAM Authenticator, and kubectl on it. Launch the Bastion Box with at least 2 vCPUs. We will run all kubectl commands from that node.
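As a quick sanity check before installing anything, you can confirm the Bastion Box has enough vCPUs. This is a minimal sketch; the 2-vCPU threshold simply follows the recommendation above.

```shell
# Illustrative pre-flight check for the Bastion Box.
cpus=$(nproc)
if [ "$cpus" -ge 2 ]; then
  echo "OK: $cpus vCPUs available"
else
  echo "WARNING: only $cpus vCPU(s); relaunch with at least 2"
fi
```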
Install the AWS CLI version 2 on Linux
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# OR, to choose the install locations explicitly:
sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin

# Confirm the installation
aws --version
Install aws-iam-authenticator on Linux
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-07-08/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

# Test that the aws-iam-authenticator binary works
aws-iam-authenticator help
Install kubectl on Linux
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

# Confirm the installation
kubectl version --client
Install Terraform on Linux
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install terraform

# Verify the installation
terraform -help

# Enable tab completion
terraform -install-autocomplete
Clone GitHub Repository
Now, clone my GitHub repository, which contains the Terraform configuration files for the next steps.
git clone https://github.com/Rounak-Surana/provision-eks-cluster-with-terraform.git

# You can explore this repository by changing directories
cd provision-eks-cluster-with-terraform/
In this repository, you will find six files used to provision a VPC, security groups, and an EKS cluster:
- vpc.tf: provisions a VPC, subnets and availability zones using the AWS VPC Module. A new VPC is created for this guide so it doesn't impact your existing cloud environment and resources.
- security-groups.tf: provisions the security groups used by the EKS cluster.
- eks-cluster.tf: provisions all the resources (AutoScaling groups, etc.) required to set up an EKS cluster in the private subnets, and bastion servers to access the cluster, using the AWS EKS Module.
On line 14, the AutoScaling group configuration contains three nodes across two worker groups. You can change the number of worker nodes (asg_desired_capacity) and the instance_type according to your needs.
worker_groups = [
  {
    name                          = "worker-group-1"
    instance_type                 = "t2.small"
    additional_userdata           = "echo foo bar"
    asg_desired_capacity          = 2
    additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
  },
  {
    name                          = "worker-group-2"
    instance_type                 = "t2.medium"
    additional_userdata           = "echo foo bar"
    additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
    asg_desired_capacity          = 1
  },
]
- outputs.tf: defines the output configuration.
- versions.tf: sets the Terraform version to at least 0.12. It also sets versions for the providers used in this sample.
Initialize Terraform workspace
# Change directory to the cloned repository
cd provision-eks-cluster-with-terraform/

# Initialize Terraform
terraform init

Initializing modules...
Downloading terraform-aws-modules/eks/aws 9.0.0 for eks...
- eks in .terraform/modules/eks/terraform-aws-modules-terraform-aws-eks-908c656
- eks.node_groups in .terraform/modules/eks/terraform-aws-modules-terraform-aws-eks-908c656/modules/node_groups
Downloading terraform-aws-modules/vpc/aws 2.6.0 for vpc...
- vpc in .terraform/modules/vpc/terraform-aws-modules-terraform-aws-vpc-4b28d3d

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.10.0...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.52.0...
- Downloading plugin for provider "random" (hashicorp/random) 2.2.1...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Provision the EKS cluster
terraform apply

module.eks.data.aws_partition.current: Refreshing state...
module.eks.data.aws_caller_identity.current: Refreshing state...
module.eks.data.aws_ami.eks_worker: Refreshing state...
data.aws_availability_zones.available: Refreshing state...
module.eks.data.aws_iam_policy_document.cluster_assume_role_policy: Refreshing state...
module.eks.data.aws_ami.eks_worker_windows: Refreshing state...
module.eks.data.aws_iam_policy_document.workers_assume_role_policy: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

## output truncated ...

Plan: 51 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
As you can see, this terraform apply will provision a total of 51 resources (VPC, security groups, AutoScaling groups, EKS cluster, etc.). If you're comfortable with this, confirm the run with "yes". The process should take approximately 10 minutes. On success, your terminal prints the outputs defined in outputs.tf.
Apply complete! Resources: 51 added, 0 changed, 0 destroyed.

Outputs:

cluster_endpoint = https://A1ADBDD0AE833267869C6ED0476D6B41.gr7.us-east-2.eks.amazonaws.com
cluster_security_group_id = sg-084ecbab456328732
kubectl_config = apiVersion: v1
preferences: {}
kind: Config

clusters:
- cluster:
    server: https://A1ADBDD0AE833267869C6ED0476D6B41.gr7.us-east-2.eks.amazonaws.com
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETXdPVEU0TXpVeU1sb1hEVE13TURNd056RTRNelV5TWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTThkClZaN1lmbjZmWm41MEgwL0d1Qi9lRmVud2dydXQxQlJWd29nL1JXdFpNdkZaeStES0FlaG5lYnR5eHJ2VVZWMXkKTXVxelBiMzgwR3Vla3BTVnNTcDJVR0ptZ2N5UVBWVi9VYVZDQUpDaDZNYmIvL3U1bWFMUmVOZTBnb3VuMWlLbgpoalJBYlBJM2JvLzFPaGFuSXV1ejF4bmpDYVBvWlE1U2N5MklwNnlGZTlNbHZYQmJ6VGpESzdtK2VST2VpZUJWCjJQMGd0QXJ3alV1N2MrSmp6OVdvcGxCcTlHZ1RuNkRqT1laRHVHSHFRNEpDUnRsRjZBQXpTUVZ0cy9aRXBnMncKb2NHakd5ZE9pSmpMb1NsYU9weDIrMTNMbHcxMDAvNmY4Q0F2ajRIbFZUZDBQOW5rN1UyK04xNSt5VjRpNjFoQgp3bHl4SXFUWEhDR0JvYmRNNE5VQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIbEI3bGVMTnJYRXZnNksvNUdtR2s5Tlh4SUkKRDd0Y1dkTklBdnFka1hWK3ltVkxpTXV2ZGpQVjVKV3pBbEozTWJqYjhjcmlQWWxnVk1JNFJwc0N0aGJnajMzMwpVWTNESnNxSmZPUUZkUnkvUTlBbHRTQlBqQldwbEtaZGc2dklxS0R0eHB5bHovOE1BZ1RldjJ6Zm9SdzE4ZnhCCkI2QnNUSktxVGZCNCtyZytVcS9ULzBVS1VXS0R5K2gyUFVPTEY2dFVZSXhXM2RncWh0YWV3MGJnQmZyV3ZvSW8KYitSOVFDTk42UHRQNEFFQSsyQnJYYzhFTmd1M2EvNG9rN3lPMjZhTGJLdC9sbUNoNWVBOEdBRGJycHlWb3ZjVgpuTGdyb0FvRnVRMCtzYjNCTThUcEtxK0YwZ2dwSFptL3ZFNjh5NUk1VFlmUUdHeEZ6VEVyOHR5NHk1az0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  name: eks_training-eks-TNajBRIF

contexts:
- context:
    cluster: eks_training-eks-TNajBRIF
    user: eks_training-eks-TNajBRIF
  name: eks_training-eks-TNajBRIF

current-context: eks_training-eks-TNajBRIF

users:
- name: eks_training-eks-TNajBRIF
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "training-eks-TNajBRIF"

region = us-east-2
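If you want to script a sanity check on the run, the completion summary can be parsed. This is an illustrative sketch only; the summary line is hard-coded here, whereas in practice you would capture the terraform apply output.

```shell
# Illustrative only: the summary line below is a hard-coded sample.
summary='Apply complete! Resources: 51 added, 0 changed, 0 destroyed.'

# Field 4 of the summary line is the count of resources added.
added=$(echo "$summary" | awk '{print $4}')
echo "resources added: $added"
# → resources added: 51
```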
Configure kubectl
Now that you've provisioned your EKS cluster, you need to configure kubectl. Customize the following command with your cluster name and region (the values from Terraform's output). It fetches the access credentials for your cluster and configures kubectl automatically.
aws eks --region us-east-2 update-kubeconfig --name training-eks-TNajBRIF
The Kubernetes cluster name and the region correspond to the output variables shown after the successful Terraform run. You can also see the cluster name in the AWS Management Console under the EKS service, and you can view these outputs again on the CLI by running:
terraform output
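As an aside, the update-kubeconfig command can be assembled from these outputs instead of copy/pasting them. The sketch below parses a hard-coded sample of the output text; it assumes outputs named cluster_name and region, which may differ from the names the repository's outputs.tf actually uses.

```shell
# Illustrative sketch: a sample of `terraform output` text is hard-coded below.
sample='cluster_name = "training-eks-TNajBRIF"
region = "us-east-2"'

# Split each line on " = " and strip the quotes from the value.
name=$(echo "$sample" | awk -F' = ' '/cluster_name/ {gsub(/"/, "", $2); print $2}')
region=$(echo "$sample" | awk -F' = ' '/^region/ {gsub(/"/, "", $2); print $2}')

echo "aws eks --region $region update-kubeconfig --name $name"
# → aws eks --region us-east-2 update-kubeconfig --name training-eks-TNajBRIF
```

In a real run you would replace the hard-coded sample with the actual command output, e.g. sample=$(terraform output).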
Clean up your workspace
Congratulations, you have provisioned an EKS cluster. Make sure to destroy any resources you created once you are done with this guide. Run the destroy command and confirm with "yes" in your terminal.
terraform destroy
When the EKS cluster is deleted, the worker nodes that were rolled out automatically at cluster creation are terminated along with it. Also, don't forget to terminate the Bastion Box instance if you no longer plan to work with it.
My GitHub Repo: https://github.com/Rounak-Surana/provision-eks-cluster-with-terraform
My LinkedIn Profile: https://www.dhirubhai.net/in/rounaksurana/
I hope you enjoyed this article and learned how to create a Kubernetes cluster on AWS with EKS. Please share it with your friends and connections if you found it useful and worth reading.
For any queries, please feel free to reach out to me at [email protected] or on LinkedIn.
https://youtu.be/mnbwJdv6ZH0