Kubernetes Cluster on AWS ec2 instances: Ansible

As we all know, Kubernetes is open-source software that allows us to deploy and manage containerized applications at scale.

So in this article we are going to set up a Kubernetes cluster on Amazon Linux 2 EC2 instances, which will run our containers and handle deployment, maintenance, and scaling on the AWS cloud, all using the automation tool Ansible.

Pre-requisites:

  • Ansible configured on your Base OS.
  • An IAM user with Administrator Access. Note: Ansible needs this user's access key and secret key so that it can provision and configure resources on AWS.
  • A private key with the .pem extension.

We'll see how to set up a Kubernetes cluster with 2 worker nodes and 1 master node on Amazon Linux 2 servers. We will do this configuration using Ansible roles, where the "kubeadm" tool is used to set up the cluster. Kubeadm is a tool built to provide "kubeadm init" and "kubeadm join" for creating Kubernetes clusters.

So let's start, step by step.

Launching EC2 instances

For this we have to set all the configuration values, such as the private key, remote user, roles path, and inventory path, in the ansible.cfg file located in /etc/ansible/.
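As a rough reference, a minimal ansible.cfg could look like the sketch below; the paths, key name, and inventory location are placeholders for your own setup:

[defaults]
# location of downloaded roles
roles_path = /etc/ansible/roles
# inventory file or directory
inventory = /etc/ansible/hosts
# default login user for Amazon Linux 2 AMIs
remote_user = ec2-user
# the .pem private key used to SSH into the instances
private_key_file = /root/myansiblekey.pem
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root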


And this is the playbook "ec2_instance.yml" for launching the instances in AWS.

- hosts: localhost
  vars_files:
    - aws_keys.yml
  vars:
    region: "us-east-1"
    access: "{{ aws_access }}"
    secret: "{{ aws_secret }}"
    aws_key: "myansiblekey"
    instance_type: "t2.micro"
    ami: "ami-047a51fa27710816e"
    vpc_subnet: "subnet-88a922ee"
    security: "mysecurity"
    slave_count: "2"
  tasks:
    # Launch one master and slave_count worker instances, tagged so the
    # dynamic inventory can group them later
    - ec2:
        key_name: "{{ aws_key }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami }}"
        wait: yes
        wait_timeout: 500
        group: "{{ security }}"
        instance_tags:
          name: "{{ item.name }}"
        count_tag:
          name: "{{ item.name }}"
        exact_count: "{{ item.count }}"
        vpc_subnet_id: "{{ vpc_subnet }}"
        assign_public_ip: yes
        region: "{{ region }}"
        aws_access_key: "{{ access }}"
        aws_secret_key: "{{ secret }}"
      loop:
        - { name: "k8s_Master", count: "1" }
        - { name: "k8s_Slave", count: "{{ slave_count }}" }

Here "aws_key.yml" is a vault file contains my AWS account IAM user's access key and secret key in the variables "aws_access" and "aws_secret". I've given the slave_count value as 2, if you want more worker node you can increase the same.

Now we'll run this playbook on our localhost, as shown below.
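Since the playbook reads its credentials from the vault file, the run needs the vault password; how you supply it is up to you (prompt or password file):

ansible-playbook ec2_instance.yml --ask-vault-pass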


Configuring dynamic inventory

Now, to do any further configuration on these instances, we need their public IPs. So we'll download a role that fetches our instances' IPs dynamically, and by using the tags we will configure the EC2 instances.
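Roles are installed with ansible-galaxy; the role name below is only a placeholder, use whichever EC2 dynamic-inventory role you pick from Ansible Galaxy:

# "<username.ec2_dynamic_inventory_role>" is a placeholder, not a real role name
ansible-galaxy install "<username.ec2_dynamic_inventory_role>"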


Read the role's README.md file and put the same IAM user's access key and secret key in the /var/credentials.yml file.
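The credentials file is plain YAML; a rough sketch of /var/credentials.yml is shown below, though the exact variable names depend on the role you installed, so follow its README:

# /var/credentials.yml -- variable names may differ per role
aws_access_key: "YOUR_IAM_ACCESS_KEY"
aws_secret_key: "YOUR_IAM_SECRET_KEY"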


After providing the credentials, I've created a playbook that includes this role.
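The playbook only needs to pull in the role; a minimal sketch of demo.yml, again with the role name as a placeholder:

# demo.yml -- the role name is a placeholder for the dynamic-inventory role you installed
- hosts: localhost
  roles:
    - "<ec2_dynamic_inventory_role>"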


and run it locally using the "ansible-playbook demo.yml" command.


After we run this playbook, we can get the tags and IPs.


Here we see that we get a new ansible.cfg file and a hosts folder, and our old ansible.cfg file is backed up. Now, by running the ec2.py script, we get the tags we specified while launching the instances, with their respective IPs.
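For example, the inventory script can be queried directly, or you can simply let Ansible ping everything it discovers (assuming ec2.py sits in your inventory path and is executable):

# list all discovered instances, grouped by tag, region, etc.
./ec2.py --list

# or verify connectivity to every instance Ansible can see
ansible all -m ping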


Configuring k8s Master and Slave

Now, to configure the Kubernetes master and slave on the AWS instances, we will install two more roles.
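Again these are installed from Ansible Galaxy; the names below are placeholders for whichever master and slave roles you choose:

# placeholders -- substitute the k8s master/slave roles you picked from Galaxy
ansible-galaxy install "<k8s_master_role>"
ansible-galaxy install "<k8s_slave_role>"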


Create the main.yml file, give the same tags as hosts, and run this playbook (a sketch is shown below).
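A minimal sketch of main.yml, assuming the role names above and the default tag-based group naming from ec2.py (the exact group names, e.g. tag_name_k8s_Master, depend on your ec2.py/ec2.ini settings):

# main.yml -- run the master role on the master tag, the slave role on the slave tag
- hosts: tag_name_k8s_Master
  roles:
    - "<k8s_master_role>"

- hosts: tag_name_k8s_Slave
  roles:
    - "<k8s_slave_role>"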


Before running this playbook, make sure that you either delete the newly generated ansible.cfg file or uncomment and fill in the configuration values in it.


In my case I'm deleting the new ansible.cfg file and renaming the backed-up one.


Once the master role has run, we get our join token, so copy it somewhere for later.


So now our Kubernetes cluster is configured on the AWS cloud. Next, I'm connecting directly to my master node.
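Connecting is a plain SSH into the master's public IP with the same .pem key; the key name and IP below are from my setup and yours will differ:

ssh -i myansiblekey.pem ec2-user@<master-public-ip>
kubectl get nodes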


As we can see, with the "kubectl get nodes" command we only get our master. This is where the join token comes in. If you forgot to copy the token while running the playbook, you can regenerate it with the command shown below. Copy the join command and run it on both worker nodes.
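On the master node, a fresh join command can be printed at any time (typically as root or with sudo):

# prints a complete "kubeadm join ..." command with a new token
kubeadm token create --print-join-command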


After running the join command on the workers, let's use the same "kubectl get nodes" command again and see what happens.


So now we can see all three nodes in the Ready state, which means we have configured the Kubernetes cluster successfully.

To check this, let's launch a pod named "demo".
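A quick way to do this from the master is with kubectl; the image here is just an example, use any image you like:

kubectl run demo --image=httpd
kubectl get pods -o wide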


So, our Kubernetes cluster has been set up entirely using Ansible.

Hope you find this helpful!

Thank you
