Kubernetes Cluster on AWS EC2 Instances: Ansible
As we all know, Kubernetes is open-source software that allows us to deploy and manage containerized applications at scale.
So in this article we are going to set up a Kubernetes cluster on Amazon Linux 2 EC2 instances on the AWS cloud, which will run containers on those instances with processes for deployment, maintenance, and scaling, using the great automation tool Ansible.
Prerequisites:
- Ansible configured on your Base OS.
- A new IAM user having Administrator Access. Note: Ansible requires this user's access key and secret key so that it can be used for provisioning and configuration.
- A private key with the .pem extension.
We'll see how to set up a Kubernetes cluster with 2 worker nodes and 1 master node on Amazon Linux 2 servers. We will do this configuration using Ansible roles, where the "kubeadm" tool is used to set up the cluster. Kubeadm is a tool built to provide "kubeadm init" and "kubeadm join" for creating Kubernetes clusters.
So let's start, step by step.
Launching EC2 instances
For this, we have to give all the configuration values, such as the private key, remote user, roles path, inventory path, etc., in the ansible.cfg file in the /etc/ansible/ location.
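For reference, a minimal ansible.cfg along these lines should work; the remote user, key path, and file names here are assumptions from my setup (Amazon Linux 2 logs in as ec2-user), so substitute your own values:

```
# /etc/ansible/ansible.cfg -- key path and inventory location are placeholders
[defaults]
inventory          = /etc/ansible/hosts
roles_path         = /etc/ansible/roles
remote_user        = ec2-user
private_key_file   = /root/myansiblekey.pem
host_key_checking  = False

[privilege_escalation]
become          = true
become_method   = sudo
become_user     = root
become_ask_pass = false
```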
And this is the playbook "ec2_instance.yml" for launching the instances in AWS.
```
- hosts: localhost
  vars_files:
    - aws_keys.yml
  vars:
    region: "us-east-1"
    access: "{{ aws_access }}"
    secret: "{{ aws_secret }}"
    aws_key: "myansiblekey"
    instance_type: "t2.micro"
    ami: "ami-047a51fa27710816e"
    vpc_subnet: "subnet-88a922ee"
    security: "mysecurity"
    slave_count: "2"
  tasks:
    - ec2:
        key_name: "{{ aws_key }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami }}"
        wait: yes
        wait_timeout: 500
        group: "{{ security }}"
        instance_tags:
          name: "{{ item.name }}"
        count_tag:
          name: "{{ item.name }}"
        exact_count: "{{ item.count }}"
        vpc_subnet_id: "{{ vpc_subnet }}"
        assign_public_ip: yes
        region: "{{ region }}"
        aws_access_key: "{{ access }}"
        aws_secret_key: "{{ secret }}"
      loop:
        - { name: "k8s_Master", count: "1" }
        - { name: "k8s_Slave", count: "{{ slave_count }}" }
```
Here "aws_key.yml" is a vault file contains my AWS account IAM user's access key and secret key in the variables "aws_access" and "aws_secret". I've given the slave_count value as 2, if you want more worker node you can increase the same.
Now we'll run this playbook on our localhost.
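Since the variables file is vaulted, we need to supply the vault password while running it:

```
ansible-playbook ec2_instance.yml --ask-vault-pass
```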
Configuring dynamic inventory
Now, to do further configuration on these instances, we need their public IPs. So we'll download a role which will fetch our instances' IPs dynamically, and by using the tags we are going to configure the EC2 instances.
Read the README.md file and provide the same IAM user's access key and secret key in the /var/credentials.yml file.
After giving the credentials, I created a playbook that includes this role.
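My demo.yml is roughly like this; the role name below is a placeholder for whichever dynamic-inventory role you downloaded:

```
# demo.yml -- the role name is a placeholder for the downloaded dynamic-inventory role
- hosts: localhost
  roles:
    - ec2_dynamic_inventory
```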
I ran it locally using the "ansible-playbook demo.yml" command.
After we run this playbook, we get the tags and IPs.
Here we see that we get a new ansible.cfg file and a hosts folder, and our old ansible.cfg file is backed up. Now, by running the ec2.py file, we get the tags we specified while launching the instances, with their respective IPs.
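To inspect the full inventory that ec2.py builds (every group with its IPs), you can also run the script directly:

```
./ec2.py --list
```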
Configuring k8s Master and Slave
Now, to configure the Kubernetes master and slaves on the AWS instances, we will install two more roles.
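The roles get installed with ansible-galaxy into the roles path set in ansible.cfg; the role names below are placeholders for the master and slave roles you pick from Ansible Galaxy:

```
# role names are placeholders; replace them with the master/slave roles you chose from Galaxy
ansible-galaxy install k8s_master_role k8s_slave_role -p /etc/ansible/roles
```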
Create the main.yml file, give the same tags as hosts, and run this playbook.
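My main.yml simply maps the tag groups created by the dynamic inventory to the two roles; the group names assume ec2.py's tag_&lt;key&gt;_&lt;value&gt; naming, and the role names are placeholders for the roles you installed:

```
# main.yml -- group names assume ec2.py's tag_name_<value> convention;
# role names are placeholders for the master/slave roles you installed
- hosts: tag_name_k8s_Master
  roles:
    - k8s_master_role

- hosts: tag_name_k8s_Slave
  roles:
    - k8s_slave_role
```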
Before running this playbook, make sure that you delete the newly generated config file, or uncomment it and give the configuration values in it.
In my case, I'm deleting the new ansible.cfg file and renaming the backed-up file.
After the master role is set up, we get our join token, so copy it somewhere for now.
So now our Kubernetes cluster is configured successfully on the AWS cloud. Now I'm connecting directly to my master node.
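I SSH in with the same key pair used while launching the instances (Amazon Linux 2's default user is ec2-user); the key file name and IP below are placeholders:

```
# key file and IP are placeholders for your own key pair and the master's public IP
ssh -i myansiblekey.pem ec2-user@<master-public-ip>
```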
As we can see, by using the "kubectl get nodes" command we only get our master. This is where the join token comes into play. If by chance you forgot to copy the token while running the playbook, you can generate the join command again on the master, and then copy and run it on both worker nodes.
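For reference, the standard command to regenerate the join command on the master is:

```
kubeadm token create --print-join-command
```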
After running the join command on the workers, let's use the same "kubectl get nodes" command again and see what happens.
Now we can see all three nodes in the Ready state, which means we have configured the Kubernetes cluster successfully.
To check this, let's launch a pod named "demo".
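A quick way to do that from the master node (the httpd image here is just an example):

```
kubectl run demo --image=httpd
kubectl get pods -o wide
```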
So, our Kubernetes cluster is set up perfectly using the Ansible tool.
Hope you find this helpful!