Configuring a Kubernetes Multi-Node Cluster on AWS EC2 Instances with Ansible Automation
Gautam Khatri
Freelancer | DevOps Engineer | Red Hat Certified Engineer (RHCE) | ARTH Learner | Flutter | OpenShift | Terraform | Expertise in Docker | Kubernetes | Python | C++ | MLOps | Hadoop | AWS | Shell Scripting
Task Description
- Ansible role to configure a K8s multi-node cluster over AWS Cloud.
- Create an Ansible playbook to launch 3 AWS EC2 instances.
- Create an Ansible playbook to configure Docker on those instances.
- Create playbooks to configure the K8s master and worker nodes on the EC2 instances created above, using kubeadm.
- Convert the playbooks into roles and upload those roles to Ansible Galaxy.
- Also, upload all the YAML code to your GitHub repository.
First, we will create the playbooks and then convert them into roles.
Let's begin!
About Ansible Roles:
1. In Ansible, a role is the primary mechanism for breaking a playbook into multiple files. This simplifies writing complex playbooks and makes them easier to reuse, by splitting the playbook into logically separate, reusable components.
2. Each role is limited to a particular functionality or desired output, with all the steps needed to produce that result contained either in the role itself or in other roles listed as dependencies.
3. Roles are not playbooks. A role is a small unit of functionality that can be reused independently, but it must be invoked from a playbook; there is no way to execute a role directly. A role also has no explicit setting for which hosts it applies to.
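To illustrate that last point, a role only runs when a playbook includes it — a minimal sketch (the playbook and host target here are illustrative):

```yaml
# site.yml — roles cannot be executed directly; a playbook decides
# which hosts they run on and in what order.
- hosts: localhost
  roles:
    - ec2_launcher   # the role defined below
```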
Prerequisites
Boto3 is the Python SDK for AWS. It lets you create, update, and delete AWS resources directly from Python scripts, and Ansible's EC2 modules use it under the hood.
pip3 install boto3
Creating the role to launch the EC2 instances
This role will launch 1 master node and 2 worker nodes.
Here we can see that the empty ec2_launcher role skeleton has been created. Now we copy the tasks from our playbook into the main.yml file of the role's tasks directory.
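The tasks file might look roughly like this — a hedged sketch using the classic ec2 module shipped with Ansible 2.9; all variable names are assumptions, not the exact code from the screenshot:

```yaml
# roles/ec2_launcher/tasks/main.yml (sketch)
- name: Launch the master node
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    instance_tags:
      Name: k8s-master
    count: 1

- name: Launch the worker nodes
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    instance_tags:
      Name: k8s-worker
    count: 2
```

Tagging the instances differently lets the dynamic inventory (set up later) group masters and workers separately.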
Now let's look at the variables used in the ec2_launcher role.
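The vars/main.yml might hold values like these — all of them illustrative placeholders, not the author's actual IDs:

```yaml
# roles/ec2_launcher/vars/main.yml (sketch — replace with your own values)
region: ap-south-1
ami_id: ami-xxxxxxxxxxxxxxxxx
instance_type: t2.micro
key_name: mykeypair
subnet_id: subnet-xxxxxxxx
```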
Let's now create the Kubernetes master node role.
Have a look at the tasks file of the kube-master-setup role.
STEPS:
1. Install the Docker software.
2. Enable the Docker service.
3. Set up the yum repository for Kubernetes.
4. Install kubectl, kubeadm, and kubelet.
5. Pull the images needed for the master setup.
6. Check the Docker cgroup driver and change it to systemd.
7. Restart the Docker service.
8. Install the iproute-tc package.
9. Configure bridging.
10. Run the sysctl command to apply the setting.
11. Initialize the control plane with kubeadm, then create the .kube directory and copy admin.conf into it.
12. Change the permissions of .kube/config.
13. Clear the caches to free up RAM.
14. Start the CoreDNS pods by applying the Flannel network add-on.
15. Create a join token and store it on the controller node, so that it can be transferred to the worker nodes to join the cluster.
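The steps above can be sketched in the role's tasks file roughly as follows — a condensed, hedged sketch: package versions, repo URLs, and exact commands may differ from the author's role, and variable names are assumptions:

```yaml
# roles/kube-master-setup/tasks/main.yml (condensed sketch)
- name: Install Docker
  package:
    name: docker
    state: present

- name: Enable and start the Docker service
  service:
    name: docker
    state: started
    enabled: yes

- name: Set up the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

- name: Install kubeadm, kubelet and kubectl
  yum:
    name: [kubeadm, kubelet, kubectl]
    state: present

- name: Pull the control-plane images
  command: kubeadm config images pull

- name: Switch the Docker cgroup driver to systemd
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: Restart Docker
  service:
    name: docker
    state: restarted

- name: Install iproute-tc
  package:
    name: iproute-tc
    state: present

- name: Let bridged traffic pass through iptables
  copy:
    content: "net.bridge.bridge-nf-call-iptables = 1\n"
    dest: /etc/sysctl.d/k8s.conf

- name: Apply the sysctl settings
  command: sysctl --system

- name: Initialize the control plane
  command: kubeadm init --pod-network-cidr={{ CIDR }} --ignore-preflight-errors=NumCores,Mem

- name: Copy admin.conf into ~/.kube
  shell: |
    mkdir -p $HOME/.kube
    cp -f /etc/kubernetes/admin.conf $HOME/.kube/config

- name: Clear the caches to free some RAM
  shell: echo 3 > /proc/sys/vm/drop_caches

- name: Deploy the Flannel add-on (starts the CoreDNS pods)
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command
  command: kubeadm token create --print-join-command
  register: join_cmd

- name: Store the join command on the controller node
  local_action: copy content="{{ join_cmd.stdout }}" dest="{{ loc_dir }}/join.sh"
```

The `--ignore-preflight-errors` flags are typical when running on small instance types such as t2.micro, which have less CPU and RAM than kubeadm's defaults expect.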
Now let's see the variables used in the kube-master-setup role.
CIDR - gives the range of IP addresses for the pods, so that each pod gets a unique IP address.
loc_dir - specifies the location of the workspace or directory where your role is stored.
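The vars file might then simply contain (the values below are placeholders, not the author's exact ones):

```yaml
# roles/kube-master-setup/vars/main.yml (sketch)
CIDR: 10.240.0.0/16          # pod network range passed to kubeadm init
loc_dir: /home/user/k8s-ws/  # workspace directory on the controller node
```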
Files in the kube-master-setup role
The file in the role's files directory is used to change the Docker cgroup driver to systemd.
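That file is Docker's daemon.json, copied to /etc/docker/; the standard content for switching the cgroup driver is:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Kubernetes recommends that the container runtime and the kubelet use the same cgroup driver, which is why Docker's default (cgroupfs) is changed here.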
Worker node setup in the Kubernetes cluster using Ansible
Let's create the role for the worker node setup.
Tasks file of the kube-worker-setup role:
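The worker tasks mirror the Docker and kubelet setup from the master role, then run the stored join command — roughly (a hedged sketch; the yum repository setup is omitted for brevity and variable names are assumptions):

```yaml
# roles/kube-worker-setup/tasks/main.yml (condensed sketch)
- name: Install Docker, kubeadm and kubelet (same as on the master)
  package:
    name: [docker, kubeadm, kubelet]
    state: present

- name: Switch the Docker cgroup driver to systemd
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: Restart and enable Docker
  service:
    name: docker
    state: restarted
    enabled: yes

- name: Configure bridging
  copy:
    content: "net.bridge.bridge-nf-call-iptables = 1\n"
    dest: /etc/sysctl.d/k8s.conf

- name: Apply the sysctl settings
  command: sysctl --system

- name: Copy the join command saved on the controller node
  copy:
    src: "{{ location_join }}/join.sh"
    dest: /tmp/join.sh
    mode: "0755"

- name: Join the cluster
  command: sh /tmp/join.sh
```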
Variables of the kube-worker-setup role
location_join - the location where the join-command file is stored; this file will be run later on the worker to join the cluster.
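The vars file might contain just that one entry (the path is an illustrative placeholder):

```yaml
# roles/kube-worker-setup/vars/main.yml (sketch)
location_join: /home/user/k8s-ws/  # directory on the controller node holding join.sh
```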
Files of the role kube-worker-setup
Some extra configuration required to make the code dynamic
The credentials must be correct for the playbook to run smoothly.
Run these commands:
- export AWS_ACCESS_KEY_ID="Place your Access key here"
- export AWS_SECRET_ACCESS_KEY="Place your secret key here"
Setting up dynamic retrieval of the EC2 instances' IPs
Download the ec2.py and ec2.ini files from the links below:
- https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
- https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini
If ec2.py shows an error related to ec2, comment out this line in it:
#from ansible.module_utils import ec2 as ec2_utils
Running the playbooks
- ec2.yml - launches the 1 master node and 2 worker nodes.
- kube-cluster.yml - combined playbook for the master node and worker node setup.
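With the roles in place, the runs might look like this (assuming ec2.py has been made executable and is used as the dynamic inventory):

```
# launch the instances (runs against localhost)
ansible-playbook ec2.yml

# configure the cluster, resolving the new instances' IPs dynamically
ansible-playbook -i ec2.py kube-cluster.yml
```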
Starting the PLAY for the worker node setup
Above we can see that all the playbooks ran successfully.
Let's check whether the configuration of the master and worker nodes is done:
All the pods are running.
The cluster is successfully created and all the nodes are in the Ready state.
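That check can be done on the master node with standard kubectl commands, for example:

```
kubectl get pods --all-namespaces   # all pods should be Running
kubectl get nodes                   # all three nodes should be Ready
```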
We have configured a Kubernetes multi-node cluster using Ansible automation.
GitHub Links:
Install and use the roles directly from Ansible Galaxy:
ansible-galaxy install gautam43.ec2launcher
ansible-galaxy install gautam43.kube_master_setup
ansible-galaxy install gautam43.kube_worker_setup