Creating Ansible Role to Configure K8S Multi Node Cluster over AWS Cloud.

What is Ansible ?

Ansible is a software tool that provides simple but powerful automation for cross-platform computer support. It is primarily intended for IT professionals, who use it for application deployment, updates on workstations and servers, cloud provisioning, configuration management, intra-service orchestration, and nearly anything a systems administrator does on a weekly or daily basis. Ansible doesn't depend on agent software and has no additional security infrastructure, so it's easy to deploy.

What is Kubernetes ?

Kubernetes (also known as k8s or “Kube”) is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

What is K8s Cluster ?

A Kubernetes cluster is a set of nodes that run containerized applications. It allows containers to run across multiple machines and environments: virtual, physical, cloud-based, and on-premises. Kubernetes containers are not restricted to a specific operating system, unlike virtual machines. Instead, they can share operating systems and run anywhere.

There are two kinds of Nodes:

  • Master Node: Hosts the “Control Plane” i.e. it’s the control center that manages the deployed resources. Some of its components are kube-apiserver, kube-scheduler, kube-controller-manager, kubelet.
  • Worker Nodes: Machines where the actual Containers are running on. Some of the active processes are kubelet service, container runtime (like Docker), kube-proxy service.

Task Description :-

Ansible Role to Configure K8S Multi Node Cluster over AWS Cloud.

1. Create an Ansible playbook to launch 3 AWS EC2 instances.

2. Create an Ansible playbook to configure Docker on those instances.

3. Create a playbook to configure the K8S master and K8S worker nodes on the above created EC2 instances using kubeadm.

In this task, we have one controller node on which Ansible is installed, and three target nodes for configuring the multi-node Kubernetes cluster.

Pre-requisites :-

  • An IAM user with full EC2 privileges (for example, the AmazonEC2FullAccess policy).
  • Ansible installed on Controller Node.

Note:- I installed Ansible on my local system, which runs RHEL 8.

Steps to Configure Multi-Node Cluster on AWS Cloud :-

Step - 1:- Installing required packages and libraries in controller node ->

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
yum repolist

yum install python3 ansible -y
pip3 install boto boto3

In my case, I had already downloaded and installed these packages.

Step - 2:- Creating Dynamic inventory of Ansible for communicating with AWS.

  • Change directory to "/etc/ansible/" and execute following commands:-
mkdir inventory 
mv hosts inventory 
cd inventory

wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py


wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini

The commands above create a directory called "inventory" in "/etc/ansible/" and move the "hosts" file into "/etc/ansible/inventory/". They then download the "ec2.py" and "ec2.ini" files into the same location.

  • Once the files are downloaded, use the following command to make the "ec2.py" file executable.
chmod +x ec2.py


Then update the "ec2.py" file, changing its shebang line from "#!/usr/bin/env python" to "#!/usr/bin/python3".


Update the "ec2.ini" file, changing "regions = all" to the region in which you want to create the cluster. I changed it to "regions = ap-south-1". Limiting the regions makes playbook execution faster, because Ansible queries only that region.

  • Now use the following commands to provide the IAM user credentials to Ansible, and also add them to the "/root/.bashrc" file so they persist across sessions.



export AWS_ACCESS_KEY_ID='AKI**********************'
export AWS_SECRET_ACCESS_KEY='6*********************************'

  • After this, make changes in your ansible.cfg file, which will most likely be at "/etc/ansible/ansible.cfg". The main settings to take care of are (note the section name is "[defaults]", and privilege escalation needs "become_user"):
[defaults]
inventory = /etc/ansible/inventory
remote_user = ec2-user
roles_path = /path_of_your_roles_folder
host_key_checking = False
private_key_file = /path_of_your_.pem_file
deprecation_warnings = False
command_warnings = False

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
  • Check the connectivity of Ansible with AWS using the following command :-
ansible all -m ping

Dynamic Inventory is configured.

Note:- while pinging, make sure at least one instance is running on AWS, so there is something to check connectivity against.

Step - 3 :- Creating Ansible Roles for Multi-Node Cluster ->

  1. Creating role for launching instances on AWS cloud.
  • Use the command given below in your role workspace (mine is called "Arth-Task-19") to create the role for launching AWS instances :-
ansible-galaxy init instance_launch

  • Now edit the "instance_launch/tasks/main.yml" file. (I used the vim editor)
vim instance_launch/tasks/main.yml

Use the "ec2" module to launch instances on AWS; you have to provide key_name, instance_type, image, instance_tags, loop, region, group_id, state, wait, and count.

Note: While providing the key name do not mention the extension e.g .pem or .ppk.
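As a reference, a minimal sketch of what "instance_launch/tasks/main.yml" could contain is shown below. The AMI ID, key name, and security group ID are placeholders you must replace with your own, and the "ec2" module shown is the one shipped with Ansible 2.9:

```yaml
# instance_launch/tasks/main.yml (sketch)
- name: Launch one EC2 instance per cluster node
  ec2:
    key_name: "{{ key_name }}"          # key pair name, without .pem/.ppk
    instance_type: t2.micro
    image: "{{ ami_id }}"               # placeholder AMI ID
    region: ap-south-1
    group_id: "{{ sg_id }}"             # placeholder security group ID
    state: present
    wait: yes
    count: 1
    instance_tags:
      Name: "{{ item }}"
  loop:
    - k8s_master
    - k8s_slave1
    - k8s_slave2
```

The Name tag set here is what the dynamic inventory later uses to group the instances, so the tag values must match the groups targeted in setup.yml.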

The instance_tags play a key role in the whole procedure, as they are what we use to specify which instance each configuration should run on. We use mainly two tags: k8s_master (for the master node specific configuration) and k8s_slave (for the slave node specific configuration). These tags come into use in the setup.yml file that applies all the roles we create here.

A problem you might face while configuring the recently launched instances is that Ansible works from the inventory data gathered at the beginning of playbook execution, when no AWS instances existed yet. So the play that is supposed to install Docker (the next step) would fail. To solve this, we add a few more tasks to "instance_launch/tasks/main.yml" so that Ansible waits for the instances to boot and then refreshes its inventory information with the meta module.
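A sketch of such tasks, appended to "instance_launch/tasks/main.yml" (the two-minute pause is an arbitrary choice; tune it to your instances' boot time):

```yaml
# Give the new instances time to boot, then refresh the dynamic inventory
- name: Wait for the instances to boot and SSH to come up
  pause:
    minutes: 2

- name: Refresh the in-memory inventory so the new instances become visible
  meta: refresh_inventory
```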

  • After that, edit the "instance_launch/vars/main.yml" file and provide the required variables in it.
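For illustration, the variables file might look like the sketch below; every value here is a placeholder to replace with your own:

```yaml
# instance_launch/vars/main.yml (sketch — all values are placeholders)
key_name: mykey                      # key pair name without the .pem extension
instance_type: t2.micro
ami_id: ami-0123456789abcdef0
sg_id: sg-0123456789abcdef0
region: ap-south-1
```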
  • Now create a setup.yml file in your role workspace and write a play in it, so that it uses the instance_launch role for launching instances on AWS.
vim setup.yml


  • Let's check whether the configuration done so far is working, using the following command :-
ansible-playbook setup.yml


  2. Creating role for configuring Master-Node on AWS:-
  • Use command given below, in your role workspace, to create role for configuring Master-Node:-
ansible-galaxy init k8s_master

  • Now edit the "k8s_master/tasks/main.yml" file from your workspace and write the required tasks in it :-
vim k8s_master/tasks/main.yml

The configuration of Kubernetes has a lot of steps if you do it all manually. In the Ansible role we'll use many modules, such as yum, command, copy, file, and blockinfile.
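An abridged sketch of what "k8s_master/tasks/main.yml" can look like. The repository URL, package set, and pod network CIDR are assumptions based on a typical kubeadm setup on Amazon Linux/RHEL, not the author's exact file:

```yaml
# k8s_master/tasks/main.yml (abridged sketch)
- name: Install Docker
  package:
    name: docker
    state: present

- name: Configure the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: no

- name: Install kubeadm, kubelet and kubectl
  package:
    name:
      - kubeadm
      - kubelet
      - kubectl
    state: present

- name: Start and enable Docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - docker
    - kubelet

- name: Pull the control-plane images
  command: kubeadm config images pull

- name: Initialize the control plane (re-running init errors out, hence ignore_errors)
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
  ignore_errors: yes
```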

At some places the normal command module won't work. There we'll use the raw module, since it executes directly over SSH without depending on Python on the host.

Remember that some of the configurations, like running the init command, may give an error when run a second time even though nothing else is wrong. So keep using ignore_errors at the places where it is required.

After you create the join token, it is a big challenge to put it in a variable and use it in the other hosts' plays if you only use the basic modules. So for that we make use of a dummy variable: we first capture the token output in a register and hand it to a dummy host, which keeps the value available for the rest of the playbook.
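One common way to implement this trick, sketched below, is to register the output of "kubeadm token create" and store it on a dummy host with the add_host module (the host name "dummy" and the variable name "join_command" are illustrative choices, not the author's exact names):

```yaml
- name: Generate the join command on the master
  command: kubeadm token create --print-join-command
  register: join_cmd

- name: Store the join command on a dummy host so later plays can read it
  add_host:
    name: dummy
    join_command: "{{ join_cmd.stdout }}"
```

Later plays can then read it back as "{{ hostvars['dummy']['join_command'] }}".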

  • After that, edit the "k8s_master/vars/main.yml" file and provide the required variables in it.
vim k8s_master/vars/main.yml
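For illustration, the variables file might hold values like these (purely illustrative assumptions):

```yaml
# k8s_master/vars/main.yml (sketch — values are illustrative)
k8s_packages:
  - docker
  - kubeadm
  - kubelet
  - kubectl
pod_network_cidr: 10.244.0.0/16
```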

  • Now update the setup.yml file in your role workspace and add a play that runs the k8s_master role on the master instance.
  3. Creating role for configuring Slave-Node on AWS:-
  • Use command given below, in your role workspace, to create role for configuring Slave-Node on AWS:-
ansible-galaxy init k8s_slave

  • Now edit the "k8s_slave/tasks/main.yml" file from your workspace and write the required tasks in it :-
vim k8s_slave/tasks/main.yml
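An abridged sketch of "k8s_slave/tasks/main.yml". The package set mirrors the master role, and the join step assumes the join command was stored on a dummy host under the illustrative names "dummy" and "join_command":

```yaml
# k8s_slave/tasks/main.yml (abridged sketch)
- name: Install Docker, kubeadm and kubelet
  package:
    name:
      - docker
      - kubeadm
      - kubelet
    state: present

- name: Start and enable Docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - docker
    - kubelet

- name: Apply the bridge-nf-call sysctl settings kubeadm needs
  command: sysctl --system

- name: Join the cluster using the command stored on the dummy host
  command: "{{ hostvars['dummy']['join_command'] }}"
  ignore_errors: yes
```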

  • After that, edit the "k8s_slave/vars/main.yml" file and provide the required variables in it.
vim k8s_slave/vars/main.yml

  • Now update the setup.yml file in your role workspace and add a play that runs the k8s_slave role on the slave instances.
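With all three roles in place, setup.yml can look like the sketch below. The group names assume the ec2.py dynamic inventory's default grouping of instances by tag as tag_Name_<value>, matching the Name tags set while launching:

```yaml
# setup.yml (sketch) — launch, then configure master and slaves
- hosts: localhost
  connection: local
  roles:
    - instance_launch

- hosts: tag_Name_k8s_master
  roles:
    - k8s_master

- hosts: tag_Name_k8s_slave1:tag_Name_k8s_slave2
  roles:
    - k8s_slave
```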
  • Let's check whether the whole configuration is working, using the following command :-
ansible-playbook setup.yml
Roles for the K8s cluster are created and configured.

Step - 4 :- Checking whether the K8s Cluster is set up or not :-

  • Execute the following command on the master node, after logging in to the EC2 instance from AWS.
kubectl get nodes

Kubernetes Multi Node Cluster Successfully Configured !

Note:- sometimes it takes time for a slave node to join the master node; if it is stuck, you can execute "sysctl --system" on the slave node.

It's working fine.

That's all, The Task is done.

Thanks for reading. . . . . .

Ansible Galaxy Link -

GitHub Repository Link -

Linkedin Profile -

