Launching a Multi-Node Kubernetes Cluster using Ansible!
Apurv Waghmare
DevOps Specialist at Amdocs
This blog explains the process of launching a multi-node Kubernetes cluster using Ansible. The same Ansible code can launch the cluster on any platform, whether cloud or bare metal; the only thing to change is the IP addresses of the machines in the master and worker groups of the inventory.
If you have ever set up a Kubernetes cluster on your own, you know how painful the task is: it consumes a lot of time and involves multiple stages before the configuration is correct.
In this agile, automation-driven world, we should automate this task: it not only saves a lot of time, it also reduces the chance of human error. For example, if 100 machines have to be configured, the probability of a manual mistake somewhere is very high. To eliminate such errors, we adopt automation tools like Terraform or Ansible, depending on the use case. Here Ansible is used because this is a configuration-management problem, and Ansible is a configuration-management tool.
The complete process of building the Kubernetes multi-node cluster is explained below with all the required steps. Kubeadm is used to bootstrap the cluster, the kubelet agent runs on every node to manage the pods, and kubectl is used to run Kubernetes commands.
What is Terraform? Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing and popular service providers as well as custom in-house solutions.
So we have to launch EC2 instances on AWS; I used Terraform to launch them. I launched one instance for the Ansible controller, one for the master node, and two more for slave node 1 and slave node 2.
provider "aws" {
  region  = "ap-south-1"
  profile = "Apurv"
}

resource "aws_instance" "myin" {
  ami             = "ami-005956c5f0f757d37"
  instance_type   = "t2.micro"
  security_groups = ["launch-wizard-8"]
  key_name        = "mykey4396"

  tags = {
    Name = "Master"
  }
}
Similar code is used for slave1 and slave2; only the Name tag changes.
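Instead of duplicating the resource block per node, the worker instances can also be launched with a `count` loop. A minimal sketch — the AMI, key, and security-group names are taken from the block above; the node-name list is an assumption for illustration:

```hcl
# Hypothetical: launch both worker nodes from one resource block
variable "slaves" {
  default = ["Slave1", "Slave2"]
}

resource "aws_instance" "slave" {
  count           = length(var.slaves)
  ami             = "ami-005956c5f0f757d37"
  instance_type   = "t2.micro"
  security_groups = ["launch-wizard-8"]
  key_name        = "mykey4396"

  tags = {
    Name = var.slaves[count.index]
  }
}
```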
Save the Terraform code with a .tf extension, then initialize the working directory and apply it:
# terraform init -----> initialize Terraform
# terraform apply -----> run the Terraform code
After launching the EC2 instances, you need to create a user with a password on each instance, and also set a hostname on all of them.
# useradd ansible and set the password using passwd command.
Then you need to grant that user sudo rights by adding an entry for it in /etc/sudoers on all the instances.
Now you have to enable password-based SSH login inside /etc/ssh/sshd_config on all the instances.
Then restart the sshd service on every instance; if sshd is not restarted, the hosts will not accept connections from Ansible.
# service sshd restart -----> restart the SSH daemon
# hostnamectl set-hostname <name_of_host> -----> set the hostname
# useradd <username> -----> add the user
# passwd <username> -----> set the password
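The per-instance preparation above can be scripted. A sketch, assuming the user is named ansible and run as root; the password and the exact sshd_config edit are placeholders, not values from the article:

```shell
# Hypothetical host-prep script (run as root on each instance)
useradd ansible
echo 'ansible:MyP@ssw0rd' | chpasswd                       # assumed password
echo 'ansible ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers     # grant sudo rights
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
service sshd restart                                       # apply the SSH changes
```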
Now we have to install the latest version of Python.
# yum install python3 -----> install Python3
# python3 --version -----> check the Python version
# pip3 --version -----> check the pip version
Now that Python is installed, we have to install Ansible.
# pip3 install ansible -----> install Ansible
# ansible --version -----> check the version
We have installed all the required software; now let's configure Ansible. List all the hosts in an inventory file, e.g. /etc/myhosts.txt. You can give the file any name, but you must then point the inventory setting in /etc/ansible/ansible.cfg to that location — the configuration file itself must be named ansible.cfg. Ansible fetches the hosts from this location.
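An inventory of roughly this shape works for the setup described here. The IP addresses, username, and password below are placeholders, not the article's actual values:

```ini
; /etc/myhosts.txt — hypothetical example inventory
[master]
10.0.0.10  ansible_user=ansible  ansible_ssh_pass=MyP@ssw0rd

[slaves]
10.0.0.11  ansible_user=ansible  ansible_ssh_pass=MyP@ssw0rd
10.0.0.12  ansible_user=ansible  ansible_ssh_pass=MyP@ssw0rd
```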
Generate an SSH key pair with ssh-keygen and copy the public key to all the nodes.
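The key generation and distribution look like this; the node IPs are placeholders matching the hypothetical inventory above:

```shell
ssh-keygen -t rsa                # generate a key pair (accept the defaults)
ssh-copy-id ansible@10.0.0.10    # copy the public key to the master
ssh-copy-id ansible@10.0.0.11    # ...and to each slave node
ssh-copy-id ansible@10.0.0.12
```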
(Screenshot: the Ansible inventory file)
If all the host IPs with their usernames and passwords are correct, we are good to go further. We can check this via the command below.
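The standard connectivity check is Ansible's ping module, assuming the inventory configured above:

```shell
ansible all -m ping    # every reachable host should answer with "pong"
```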
Yes, you can see all the hosts are pingable, so we are good to go. Now create a file named daemon.json; we will copy this file from the Ansible node to all the master and slave nodes. It contains the setting that makes Docker use systemd as its cgroup driver.
# cat daemon.json (on the target nodes this file must end up at /etc/docker/daemon.json, where Docker reads it)
{ "exec-opts": ["native.cgroupdriver=systemd"] }
This configuration consists of three files: one for the common setup of the master and worker nodes in the cluster, another for initializing the master node, and the last for joining the worker nodes so they can run the pods.
Configuration of the Master & Worker Nodes
- Configuring the Yum Repository for EPEL Packages (Extra software for future use).
- Configuring the Yum Repository for Docker Repository.
- Configuring the Yum Repository for Kubernetes Repository.
- Installing Docker.
- Installing Kubernetes.
- Stopping Firewall daemon.
- Disabling SELinux.
- Starting the Docker services.
- Installing Python3 & pip3 for the docker-py module.
- Changing the Cgroup of docker from cgroupfs to systemd.
- Disabling Swap permanently after reboot.
- Adding master & slave IPs to the /etc/hosts file.
- Installing IpRoute-TC Package.
- Reloading systemd driver.
- Restarting Docker services.
- Starting & Enabling Kubelet services.
- Disabling the swap in the current session.
- Renaming the Kubernetes Master hostname to master.
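A few of the steps above look roughly like this in playbook form. This is an illustrative excerpt using standard Ansible modules, not the article's exact playbook — the full task list lives in the repository linked at the end:

```yaml
# Hypothetical excerpt from the common master/worker setup playbook
- hosts: all
  become: yes
  tasks:
    - name: Configure the Kubernetes yum repository
      yum_repository:
        name: kubernetes
        description: Kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        gpgcheck: no

    - name: Install kubeadm, kubelet and kubectl
      yum:
        name: [kubeadm, kubelet, kubectl]
        state: present

    - name: Switch Docker's cgroup driver to systemd
      copy:
        src: daemon.json
        dest: /etc/docker/daemon.json

    - name: Start & enable kubelet
      service:
        name: kubelet
        state: started
        enabled: yes
```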
Now run the Ansible playbook main_playbook.yml:
#ansible-playbook main_playbook.yml
After the K8s cluster is created, any pods you launch inside it will get IPs from the 10.10.1.0 range (the pod-network CIDR given to kubeadm).
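To see what that address range actually covers, a quick check with Python's standard ipaddress module — assuming the pod CIDR is 10.10.1.0/24, which is an assumption about the prefix length, not stated in the article:

```python
import ipaddress

# Pod-network CIDR assumed to be 10.10.1.0/24 (the article gives only the base address)
pods = ipaddress.ip_network("10.10.1.0/24")
usable = list(pods.hosts())

print(pods.num_addresses)     # 256 addresses in the block
print(usable[0], usable[-1])  # first and last assignable pod IPs: 10.10.1.1 10.10.1.254
```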
After running the above command, kubeadm init on the master node prints the commands to create the kubectl config file, plus a join token that helps us join the slave nodes to the master. We also have to give the pods inside the cluster a way to talk to each other, so we run the Flannel command.
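On the master, the sequence looks like this. The kubeadm and config-setup commands are the standard ones kubeadm prints; the Flannel manifest URL is the commonly documented one, and the CIDR matches the range mentioned above:

```shell
# Initialize the control plane (run on the master)
kubeadm init --pod-network-cidr=10.10.1.0/24

# Set up kubectl for the current user (kubeadm prints these after init)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Flannel pod network so pods can reach each other
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```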
Now we can use the token given by the master node to join each slave node and complete the K8s cluster.
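If the token output is lost, the master can reprint the full join command. On each slave the join looks like this; the address, token, and hash are placeholders that the master prints:

```shell
# On the master: regenerate the join command if needed
kubeadm token create --print-join-command

# On each slave (placeholders come from the master's output)
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```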
After running the join command on the slave nodes, they will join the master node.
Now we have successfully Launched a Multi-Node Kubernetes Cluster using Ansible.
Deploying WordPress, MySQL, Grafana & Prometheus over the K8s Cluster
Check which nodes Grafana and Prometheus are running on; then access them using the IP of node01 or node02 and the exposed port number.
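Both the check and the exposure can be done with kubectl. The deployment names and ports below are placeholders for whatever was actually used:

```shell
# See which node each pod landed on
kubectl get pods -o wide

# Expose Grafana and Prometheus on NodePorts (names/ports are placeholders)
kubectl expose deployment grafana --type=NodePort --port=3000
kubectl expose deployment prometheus --type=NodePort --port=9090

# Note the assigned NodePorts, then browse http://<node-ip>:<node-port>
kubectl get svc
```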
Finally, we have launched a K8s cluster using Ansible and deployed a WordPress application with a MySQL database, along with Prometheus and Grafana.
https://github.com/apurvwagh/Ansible-k8-Cluster
Thanks for reading... :)