ANSIBLE ROLE TO CONFIGURE K8S MULTI NODE CLUSTER OVER AWS CLOUD.


WHAT IS KUBERNETES?

Kubernetes, also known as K8s, is a system used to automate the deployment, scaling, and management of containerized applications. A K8s single-node cluster means a single master and worker node, while a K8s multi-node cluster means a single master and many worker nodes, i.e. a high-availability cluster.


WHAT IS ANSIBLE?

Ansible is one of the simplest ways to automate the configuration of technology tools, and it is mainly used for configuration management. An Ansible role makes configuring a technology very easy, as it is a kind of package used for organizing and reusing Ansible code.


WHAT IS AMAZON WEB SERVICE?

AWS, i.e. Amazon Web Services, is a cloud platform that provides cloud-based services and many other functionalities. It offers services like EC2 instances, databases, security, user management (IAM), load balancing, etc.; in short, a whole bundle of services is provided by this cloud.


DYNAMIC INVENTORY:-

Dynamic inventory is used because many servers may be running as AWS EC2 instances; a dynamic inventory script groups the instances by tags, region, subnet ID, security group ID, instance type, etc.

Task Description:-

- Ansible Role to Configure K8S Multi Node Cluster over AWS Cloud.

- Create an Ansible playbook to launch 3 AWS EC2 instances.

- Create an Ansible playbook to configure Docker over those instances.

- Create a playbook to configure the K8S Master and K8S Worker Nodes on the above-created EC2 instances using kubeadm.

- Convert the playbooks into roles.

Let's start...

First create a workspace directory, and inside it create three roles, i.e. aws, K8s-Master, and K8s-WorkerNodes.


STEP 1 :- Provisioning the EC2 instances with the help of an Ansible role.

ansible-galaxy init aws
        

In this role,

cd vars

gedit main.yml
        
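The vars/main.yml screenshot is not reproduced here; a minimal sketch of what such a variables file typically holds follows (every value below is an illustrative placeholder, not the author's actual ID):

```yaml
# vars/main.yml of the aws role -- illustrative placeholders only
region: ap-south-1
instance_type: t2.micro
ami_id: ami-0123456789abcdef0        # an Amazon Linux 2 AMI for the region
key_name: mykey
subnet_id: subnet-0123456789abcdef0
sg_id: sg-0123456789abcdef0
```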
cd tasks

gedit main.yml        
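The tasks file itself is shown only as a screenshot; a sketch of an equivalent tasks/main.yml, assuming the classic ec2 module and the variable names above, could look like:

```yaml
# tasks/main.yml of the aws role -- a sketch, not the author's exact file
- name: launch the K8s Master instance
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"
    wait: yes
    count: 1
    instance_tags:
      Name: K8s-Master
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"

- name: launch the K8s Worker Node instances
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"
    wait: yes
    count: 2
    instance_tags:
      Name: K8s-WorkerNodes
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
```

The tag names here matter: they become the dynamic-inventory group names used later.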

Here, an IAM user lwuser1 is created, which provides the access key and secret key.


Now the role is invoked directly from the aws.yml playbook file.

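A top-level aws.yml of this shape is conventional (a sketch, assuming the role name above); it targets localhost because the EC2 API calls are made from the Ansible control node:

```yaml
# aws.yml -- sketch of the playbook that invokes the aws role
- hosts: localhost
  connection: local
  roles:
    - aws
```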

Now run the ansible playbook.

ansible-playbook aws.yml
        

Now we can see that one K8s-Master and two K8s-WorkerNodes EC2 instances have been launched.


STEP 2 :- Now we have to configure one instance as the Master and two as Worker Nodes.

Now let's come to the dynamic inventory part.

The dynamic inventory will separate the instances according to tags, region, public and private IPs, etc.


Ansible provides two files for the EC2 dynamic inventory: the ec2.py script and its ec2.ini configuration file.

wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py

wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini

        

In these files we have to make some changes, and to run the script there are prerequisites: the boto and boto3 libraries should be installed on the system.

pip/pip3 install boto/boto3
        

Now, to successfully make an API call to AWS, we need to configure Boto/Boto3 (the Python interface to AWS). There are a variety of methods available, but the simplest is just to export two environment variables:-

export AWS_ACCESS_KEY_ID="access_key"

export AWS_SECRET_ACCESS_KEY="secret_key"
        

The other option is to add the access key and secret key into ec2.ini, and then make both files executable.

chmod +x ec2.ini

chmod +x ec2.py
        

Now, after launching the instances, we can see the host group names to use by running the ec2.py file.

python3 ec2.py
        

So we get tag_Name_K8s_Master and tag_Name_K8s_WorkerNodes to be used.

STEP 3 :- Create Role for configuring K8s-Master.

ansible-galaxy init K8s-Master
        

In this role,

cd vars

gedit main.yml
        
cd tasks 

gedit main.yml
        

Steps for the configuration of K8s-Master are as follows:-

  1. Installing docker and iproute-tc
  2. Configuring the yum repo for Kubernetes
  3. Installing the kubeadm, kubelet and kubectl programs
  4. Enabling the docker and kubelet services
  5. Pulling the config images
  6. Configuring the docker daemon.json file
  7. Restarting the docker service
  8. Configuring the iptables bridge settings and refreshing sysctl
  9. Reloading with systemctl
  10. Initializing the cluster with kubeadm
  11. Creating the .kube directory
  12. Copying the config file into it
  13. Installing add-ons, e.g. flannel
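The tasks themselves appear only as screenshots; an abridged sketch of a tasks/main.yml implementing the steps above follows. The repo URLs, init flags, and flannel manifest reflect common kubeadm-era tutorials, not necessarily the author's exact file:

```yaml
# K8s-Master tasks/main.yml -- an abridged sketch, not the author's exact file
- name: install docker and iproute-tc
  package:
    name: [docker, iproute-tc]
    state: present

- name: configure the yum repo for Kubernetes
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

- name: install kubeadm, kubelet and kubectl
  yum:
    name: [kubeadm, kubelet, kubectl]
    state: present
    disable_excludes: kubernetes

- name: start and enable docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: pull the kubeadm config images
  command: kubeadm config images pull

- name: set the systemd cgroup driver for docker
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: restart docker
  service:
    name: docker
    state: restarted

- name: allow bridged traffic through iptables
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

- name: refresh sysctl
  command: sysctl --system

- name: initialize the cluster (flags vary with instance size)
  command: kubeadm init --pod-network-cidr=10.244.0.0/16
           --ignore-preflight-errors=NumCPU,Mem

- name: set up the admin kube config
  shell: |
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

- name: install the flannel network add-on
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```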

cd files

gedit daemon.json

gedit k8s.conf        
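The screenshots of these two files are not reproduced, but their contents are standard for kubeadm on RHEL-family systems: daemon.json switches Docker to the systemd cgroup driver that kubeadm expects, and k8s.conf lets bridged traffic pass through iptables. As commonly used:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

And k8s.conf:

```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```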

STEP 4 :- Create Role for configuring K8s-WorkerNodes.

ansible-galaxy init K8s-WorkerNodes
        

In this role,

cd tasks

gedit main.yml
        

Steps for the configuration of K8s-WorkerNodes are as follows:-

  1. Installing docker and iproute-tc
  2. Configuring the yum repo for Kubernetes
  3. Installing the kubeadm, kubelet and kubectl programs
  4. Enabling the docker and kubelet services
  5. Pulling the config images
  6. Configuring the docker daemon.json file
  7. Restarting the docker service
  8. Configuring the iptables bridge settings and refreshing sysctl
  9. Reloading with systemctl
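Since the worker steps above mirror the master role's first steps (with no kubeadm init, kube config, or flannel), a compressed sketch of the worker tasks file is enough to show the shape:

```yaml
# K8s-WorkerNodes tasks/main.yml -- compressed sketch; the middle tasks
# (yum repo, kubeadm/kubelet/kubectl install, services, image pull,
# daemon.json, docker restart) are identical to the K8s-Master role
- name: install docker and iproute-tc
  package:
    name: [docker, iproute-tc]
    state: present

- name: allow bridged traffic through iptables
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

- name: refresh sysctl
  command: sysctl --system
```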

cd files

gedit daemon.json

gedit k8s.conf        

STEP 5 :- Put the K8s-Master and K8s-WorkerNodes roles into one file named cluster.yml with the proper host names, and then run the playbook.

gedit cluster.yml
        

In this file, a command has also been added that creates a token on the master; through it the master provides the private IP and port number that the worker nodes use to join.
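A cluster.yml of this shape is conventional, using the dynamic-inventory group names found earlier; the token step is sketched below (the registered variable name is illustrative):

```yaml
# cluster.yml -- sketch of wiring the roles together with the join token
- hosts: tag_Name_K8s_Master
  roles:
    - K8s-Master
  tasks:
    - name: create a fresh join command on the master
      command: kubeadm token create --print-join-command
      register: join_cmd

- hosts: tag_Name_K8s_WorkerNodes
  roles:
    - K8s-WorkerNodes
  tasks:
    - name: join this worker to the cluster
      command: "{{ hostvars[groups['tag_Name_K8s_Master'][0]].join_cmd.stdout }}"
```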

Now run the playbook.

ansible-playbook cluster.yml
        

STEP 6 :- Manually check whether the configuration has been done or not.

kubectl get pods

kubectl get nodes
        
kubectl create deployment success --image=vimal13/apache-webserver-php

kubectl expose deployment success --port=80 --type=NodePort

kubectl get pods -o wide        
kubectl get svc        

Now, using the public IP of a worker node and the exposed NodePort, we can see that the pod has been launched inside one of the worker nodes.


STEP 7 :- Now publish the roles to Ansible Galaxy using a GitHub repository, and for that create one directory.


In this task I have used my own three Ansible roles: aws for provisioning the 3 EC2 instances, K8s-Master for configuring the master, and K8s-WorkerNodes for configuring the worker nodes, to set up the multi-node cluster over AWS.

Ansible Galaxy link:- 

https://galaxy.ansible.com/prachikakanodia/aws

https://galaxy.ansible.com/prachikakanodia/k8s_master

https://galaxy.ansible.com/prachikakanodia/k8s_workernodes
        

And thus, all the objectives of the task have been completed.

Thank you for reading my article!

KEEP LEARNING. KEEP EXPLORING.
