ANSIBLE ROLE TO CONFIGURE K8S MULTI NODE CLUSTER OVER AWS CLOUD.
Udit Agarwal
WHAT IS KUBERNETES?
Kubernetes, also known as K8s, is a system used for the automated deployment, scaling, and management of containerized applications. A K8s single-node cluster means a single master and worker node, while a K8s multi-node cluster means a single master and many worker nodes, i.e. a highly available cluster.
WHAT IS ANSIBLE?
Ansible is one of the simplest ways to automate the configuration of technology stacks, i.e. it is mainly used for configuration management. An Ansible role makes configuring a technology very easy, as it is a kind of package for organizing and reusing Ansible code.
WHAT IS AMAZON WEB SERVICE?
AWS, i.e. Amazon Web Services, is a cloud that provides cloud-based services and many other functionalities. It offers services such as EC2 instances, databases, security, IAM users, load balancing, etc., i.e. a whole bundle of services is provided by this cloud.
DYNAMIC INVENTORY:-
A dynamic inventory is used because many servers may be running as AWS EC2 instances; instead of hard-coding their IPs, the dynamic inventory groups the instances by tags, region, subnet ID, security group ID, instance type, etc.
Task Description:-
- Ansible Role to Configure K8S Multi Node Cluster over AWS Cloud.
- Create an Ansible playbook to launch 3 AWS EC2 instances.
- Create an Ansible playbook to configure Docker over those instances.
- Create a playbook to configure the K8s master and K8s worker nodes on the above-created EC2 instances using kubeadm.
- Convert the playbooks into roles.
Let's start...
First, create a workspace directory, and inside it create three roles: aws, K8s-Master, and K8s-WorkerNodes.
STEP 1 :- Provision the EC2 instances with the help of an Ansible role.
ansible-galaxy init aws
In this role,
cd vars
gedit main.yml
cd tasks
gedit main.yml
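The exact values depend on your own account, but a minimal sketch of the two files could look like the following (the AMI ID, key pair, subnet and security group are placeholders assumed here, not values taken from the original role):

# vars/main.yml -- placeholder values, replace with your own
region: "ap-south-1"
ami_id: "ami-0xxxxxxxxxxxxxxxx"      # any Amazon Linux 2 / RHEL AMI in your region
instance_type: "t2.micro"
key_name: "mykey"
subnet_id: "subnet-xxxxxxxx"
sg_id: "sg-xxxxxxxx"

# tasks/main.yml -- launch one master and two worker instances
- name: provision the EC2 instances for the cluster
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"
    assign_public_ip: yes
    wait: yes
    count: "{{ item.count }}"
    state: present
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      Name: "{{ item.name }}"
  loop:
    - { name: "K8s-Master", count: 1 }
    - { name: "K8s-WorkerNodes", count: 2 }

The Name tags matter later: the dynamic inventory turns them into the host groups used in STEP 2.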
Here, an IAM user lwuser1 is created, which provides the access key and secret key.
Now the role is referenced directly in the aws.yml file.
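aws.yml itself stays small; something along these lines (the credentials file name and the access_key/secret_key variables are assumptions, e.g. an Ansible Vault file holding the keys of the IAM user):

# aws.yml -- run the provisioning role from the controller node
- hosts: localhost
  connection: local
  vars_files:
    - credentials.yml        # assumed vault/vars file with access_key and secret_key
  roles:
    - aws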
Now run the ansible playbook.
ansible-playbook aws.yml
Now we can see that one K8s-Master and two K8s-WorkerNodes EC2 instances have been launched.
STEP 2 :- Now we have to configure one instance as the master and two as worker nodes.
Now let's come to the dynamic inventory part.
The dynamic inventory separates the instances according to tags, region, public and private IPs, etc.
Ansible provides two files for the dynamic inventory, the ec2.py script and its configuration file ec2.ini:
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
In these files we have to make some changes, and there is a prerequisite before running them: boto and boto3 should be installed on the system.
pip3 install boto boto3
Now, to successfully make an API call to AWS, we will need to configure Boto/Boto3 (the Python interface to AWS). There are a variety of methods available, but the simplest is just to export two environment variables:-
export AWS_ACCESS_KEY_ID="access_key"
export AWS_SECRET_ACCESS_KEY="secret_key"
The other option is to add the access key and secret key into ec2.ini itself. Either way, both files then have to be made executable.
chmod +x ec2.ini
chmod +x ec2.py
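Depending on which version of ec2.ini you end up with, it may expose a [credentials] section for this; if yours does not (the stable-1.9 copy linked above may not have it), stick to the environment variables shown earlier. Roughly:

# ec2.ini -- [credentials] section (only present in newer copies of the file)
aws_access_key_id = <access key of lwuser1>
aws_secret_access_key = <secret key of lwuser1>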
Now, after launching the instances, we can see the host group names that we can use by running the ec2.py file.
python3 ec2.py
So we get the groups tag_Name_K8s_Master and tag_Name_K8s_WorkerNodes to use as hosts.
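To make Ansible pick these groups up automatically, the inventory can be pointed at the script in the Ansible configuration file. A sketch (the remote user ec2-user and the key path are assumptions for an Amazon Linux image):

# /etc/ansible/ansible.cfg (or a project-local ansible.cfg)
[defaults]
inventory         = /path/to/ec2.py
remote_user       = ec2-user
private_key_file  = /path/to/mykey.pem
host_key_checking = False

[privilege_escalation]
become = true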
STEP 3 :- Create Role for configuring K8s-Master.
ansible-galaxy init K8s-Master
In this role,
cd vars
gedit main.yml
cd tasks
gedit main.yml
Steps for the configuration of K8s-Master are as follows:-
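A condensed sketch of what tasks/main.yml for the master looks like (the yum repo baseurl, package list and flannel manifest URL below are the usual kubeadm defaults, taken here as assumptions):

# K8s-Master/tasks/main.yml (condensed sketch)
- name: install docker
  package:
    name: docker
    state: present

- name: start and enable docker
  service:
    name: docker
    state: started
    enabled: yes

- name: configure the kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: no

- name: install kubeadm, kubelet, kubectl and iproute-tc
  package:
    name: [kubeadm, kubelet, kubectl, iproute-tc]
    state: present

- name: enable kubelet
  service:
    name: kubelet
    enabled: yes

- name: pull the control-plane images
  command: kubeadm config images pull

- name: switch docker to the systemd cgroup driver
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: restart docker so the cgroup driver change takes effect
  service:
    name: docker
    state: restarted

- name: set the bridge netfilter sysctls
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

- name: reload sysctl settings
  command: sysctl --system

- name: initialise the control plane (t2.micro needs the preflight overrides)
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU,Mem

- name: set up kubeconfig for the login user
  shell: mkdir -p $HOME/.kube && cp -f /etc/kubernetes/admin.conf $HOME/.kube/config

- name: install the flannel network add-on
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml --kubeconfig /etc/kubernetes/admin.conf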
cd files
gedit daemon.json
gedit k8s.conf
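Both files are tiny; their usual contents are the systemd cgroup driver setting for Docker and the bridge netfilter sysctls for Kubernetes:

# files/daemon.json -- make docker use the cgroup driver kubelet expects
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# files/k8s.conf -- let iptables see bridged traffic
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1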
STEP 4 :- Create Role for configuring K8s-WorkerNodes.
ansible-galaxy init K8s-WorkerNodes
In this role,
cd tasks
gedit main.yml
Steps for the configuration of K8s-WorkerNodes are as follows:-
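The worker tasks mirror the master role, minus kubeadm init and the network add-on; the actual join happens later from cluster.yml. A sketch, with the same assumed repo and package names:

# K8s-WorkerNodes/tasks/main.yml (condensed sketch)
- name: install docker
  package:
    name: docker
    state: present

- name: start and enable docker
  service:
    name: docker
    state: started
    enabled: yes

- name: configure the kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: no

- name: install kubeadm, kubelet, kubectl and iproute-tc
  package:
    name: [kubeadm, kubelet, kubectl, iproute-tc]
    state: present

- name: enable kubelet
  service:
    name: kubelet
    enabled: yes

- name: switch docker to the systemd cgroup driver
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: restart docker
  service:
    name: docker
    state: restarted

- name: set the bridge netfilter sysctls
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

- name: reload sysctl settings
  command: sysctl --system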
cd files
gedit daemon.json
gedit k8s.conf
STEP 5 :- Put the K8s-Master and K8s-WorkerNodes roles into one file named cluster.yml with the proper host group names, and then run the playbook.
gedit cluster.yml
In this file, a task that runs kubeadm token create on the master has been added; the master returns the join command (its private IP, port number and token) that the worker nodes use to join the cluster.
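A sketch of how cluster.yml can wire the two roles together and hand the join command from the master to the workers (the host group names come from the dynamic inventory above; sharing the registered result through hostvars is an assumption about how the token reaches the workers):

# cluster.yml -- configure the master, then join the workers
- hosts: tag_Name_K8s_Master
  roles:
    - K8s-Master
  tasks:
    - name: generate the join command on the master
      command: kubeadm token create --print-join-command
      register: join_cmd

- hosts: tag_Name_K8s_WorkerNodes
  roles:
    - K8s-WorkerNodes
  tasks:
    - name: run the master's join command on each worker
      command: "{{ hostvars[groups['tag_Name_K8s_Master'][0]]['join_cmd']['stdout'] }}"

Because roles run before a play's own tasks, each worker is fully prepared before it executes the join command.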
Now run the playbook.
ansible-playbook cluster.yml
STEP 6 :- Check manually whether the configuration was done or not.
kubectl get pods
kubectl get nodes
kubectl create deployment success --image=vimal13/apache-webserver-php
kubectl expose deployment success --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc
Now, using the public IP of a K8s-WorkerNode and the exposed port number, we can see that the pod has been launched inside one of the worker nodes.
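For example, from any machine (the NodePort value comes from the kubectl get svc output, so it is only a placeholder here):

curl http://<worker-node-public-ip>:<nodeport>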
STEP 7 :- Now publish the roles to Ansible Galaxy using a GitHub repository, and for that create one directory.
In this task I have used three Ansible roles that I created myself: aws for provisioning the 3 EC2 instances, K8s-Master for configuring the master, and K8s-WorkerNodes for configuring the worker nodes, to set up the multi-node cluster over AWS.
Ansible Galaxy link:-
https://galaxy.ansible.com/prachikakanodia/aws
https://galaxy.ansible.com/prachikakanodia/k8s_master
https://galaxy.ansible.com/prachikakanodia/k8s_workernodes
And thus, all the objectives of the task have been completed.
Thank you for reading my article!!
KEEP LEARNING. KEEP EXPLORING.