Kubernetes Multi-Node Cluster Setup on AWS with Ansible
Hello Guys,
In this article we will set up a Kubernetes cluster on Amazon Web Services (AWS) with the help of Ansible. First we will introduce you to Kubernetes, AWS, and Ansible.
Introduction to Kubernetes (k8s) -
- Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
- Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Kubernetes provides service discovery and load balancing, storage orchestration, automated rollouts and rollbacks, automatic bin packing, self-healing, and secret and configuration management.
Introduction to Amazon Web Services (AWS) -
- Amazon Web Services (AWS) is a subsidiary of Amazon providing on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis.
- These cloud computing web services provide a variety of basic abstract technical infrastructure and distributed computing building blocks and tools.
- One of these services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, available all the time, through the Internet. AWS's version of virtual computers emulates most of the attributes of a real computer, including hardware central processing units (CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-disk/SSD storage; a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, and customer relationship management (CRM).
Introduction to Ansible -
- Ansible is a universal language, unraveling the mystery of how work gets done. Turn tough tasks into repeatable playbooks. Roll out enterprise-wide protocols with the push of a button.
- Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.
Setup Kubernetes Cluster on AWS -
- For this practical I have a virtual machine (RedHat 8) running on top of VirtualBox, where Ansible is already installed. My workspace for this practical is "/Cluster_Management/kubernetes".
# ansible --version
Launch Three Instances on AWS -
- One of them will be the Kubernetes master and the other two will be Kubernetes slaves.
- In a Kubernetes cluster the master node manages all slave nodes: the master node takes requests from clients for using the cluster, and the slave nodes provide resources like RAM and CPU to the cluster.
- For launching instances, AWS provides a service called Amazon Elastic Compute Cloud (EC2). To launch EC2 instances, Ansible has the module 'amazon.aws.ec2_instance'.
- I will tag the master node with (key - kubernetes, value - master) and the slave nodes with (key - kubernetes, value - slave), because later, when Ansible fetches the instance IPs, it will use these tags in the playbook.
- For security, AWS uses 'Security Groups' to create firewall rules. To create a security group in Ansible we have the 'amazon.aws.ec2_group' module. In my case I will allow all ports and IPs for this practical.
- I have created a playbook "aws_ec2.yml" for creating the security group and launching the EC2 instances (I will create this infrastructure inside the "ap-south-1" region).
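Such a playbook might look roughly like this. This is a minimal sketch, not the article's exact file: the security-group name, instance names, and AMI ID placeholder are assumptions; only the modules, region, key name, and tags come from the article.

```yaml
# aws_ec2.yml — sketch; security-group name, instance names, and AMI are assumptions
- name: Provision security group and EC2 instances for the cluster
  hosts: localhost
  vars_files:
    - aws_cred.yml
  tasks:
    - name: Create an all-open security group (practice setup, not for production)
      amazon.aws.ec2_group:
        name: kubernetes_sg
        description: Allow all traffic for this practical
        region: ap-south-1
        aws_access_key: "{{ aws_access_key_id }}"
        aws_secret_key: "{{ aws_secret_access_key }}"
        rules:
          - proto: all
            cidr_ip: 0.0.0.0/0

    - name: Launch the master node (tagged kubernetes=master)
      amazon.aws.ec2_instance:
        name: k8s-master
        key_name: hello
        instance_type: t2.micro
        image_id: ami-xxxxxxxx        # placeholder AMI ID
        region: ap-south-1
        security_group: kubernetes_sg
        aws_access_key: "{{ aws_access_key_id }}"
        aws_secret_key: "{{ aws_secret_access_key }}"
        tags:
          kubernetes: master

    - name: Launch two slave nodes (tagged kubernetes=slave)
      amazon.aws.ec2_instance:
        name: "k8s-slave-{{ item }}"
        key_name: hello
        instance_type: t2.micro
        image_id: ami-xxxxxxxx        # placeholder AMI ID
        region: ap-south-1
        security_group: kubernetes_sg
        aws_access_key: "{{ aws_access_key_id }}"
        aws_secret_key: "{{ aws_secret_access_key }}"
        tags:
          kubernetes: slave
      loop: [1, 2]
```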
- "aws_cred.yml" is a vault file which have my aws_secret_access_key and aws_access_key_id variables . This file have a id "aws_keys" which I given at creation time of this vault .
# cat aws_cred.yml
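The `cat` above shows only the encrypted vault text. For reference, the decrypted contents would be just the two variables (values below are placeholders); such a file can be created with `ansible-vault create --vault-id aws_keys@prompt aws_cred.yml`:

```yaml
# Decrypted view of aws_cred.yml — placeholder values, never commit real keys
aws_access_key_id: AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```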
- Run "aws_ec2.yml" playbook to launch EC2 Instance -
# ansible-playbook --vault-id aws_keys@prompt aws_ec2.yml
Directory Structure For AWS -
- I created a directory structure so that I can put the AWS-related files in it: the 'ansible.cfg' for AWS, the AWS key 'hello.pem', and the inventory files. This directory will help us fetch IPs from AWS and make a static inventory with particular groups like "kubernetes_master", "kubernetes_slave", and "kubernetes_cluster".
- This directory layout will also help when we have multiple platforms like GCP, OpenStack, etc.; according to the requirement we can use the corresponding directory.
# ls -R aws/
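The layout looks roughly like this (a sketch; any file names beyond those mentioned in the article are assumptions):

```
aws/
├── ansible.cfg
├── key/
│   └── hello.pem
└── inventory/
    ├── dynamic/
    │   └── ec2.py
    └── static/
        └── hosts
```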
- In "aws/key" directory we will put our key files , In my case I have attached key "hello" to my EC2 instances . and provides this file some permission so that user of OS (like root )read this file and ansible can use this file -
# chmod 400 aws/key/hello.pem
- In "aws/inventory/dynamic" directory file I have dynamic inventory "ec2.py" which will fetch ec2 instance ip for aws . and change the executable python interpreter version to 3
- Make the "ec2.py" file executable -
# chmod +x aws/inventory/dynamic/ec2.py
- In "aws/inventory/static" directory I have a jinja template file "hosts" for static inventory - ("kubernetes_master" group will have public ips of master node , "kubernetes_slave" group will have public ips of slave node , "private_kubernetes_master" group will have private ips of master node , "private_kubernetes_slave" group will have private ips of slave node)
- Here we created the private IP groups because we will use these private IPs in task conditions inside the Ansible role -
- The Ansible configuration file "aws/ansible.cfg" for AWS -
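A minimal sketch of what this configuration might contain, assuming the dynamic inventory, the key, and an ec2-user login (all paths and options here are assumptions, not the article's exact file):

```ini
# aws/ansible.cfg — sketch; paths and options are assumptions
[defaults]
inventory         = inventory/dynamic/ec2.py
remote_user       = ec2-user
private_key_file  = key/hello.pem
host_key_checking = False

[privilege_escalation]
become        = True
become_method = sudo
become_user   = root
```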
Fetch Ips and Create Static Inventory -
- Copy "aws/ansible.cfg" in current workspace where we are working or we can use ansible for this . I created a playbook "ansible_conf_setup.yml"
- Right now I don't have any Ansible configuration file in the workspace -
- Now run this playbook -
# ansible-playbook ansible_conf_setup.yml
- To fetch the EC2 instance IPs I created another playbook "static_inventory.yml".
- Now export AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION so that the dynamic inventory can go to AWS and fetch the IPs. I have already exported these variables.
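For reference, the exports look like this (the values below are placeholders; substitute your own credentials and never commit real keys):

```shell
# Placeholder values — replace with your own credentials
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_REGION="ap-south-1"
```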
- Whenever we run the "static_inventory.yml" playbook it will fetch the EC2 instance IPs and create the static inventory "hosts" in the current workspace. It will also change the inventory path in "ansible.cfg", so that when we run any playbook it will not fetch the instance IPs again -
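The two steps described above — render the static hosts file from the template, then repoint ansible.cfg — could be sketched like this (task details and paths are assumptions, not the article's exact playbook):

```yaml
# static_inventory.yml — sketch; paths and task details are assumptions
- name: Build a static inventory from the dynamic EC2 inventory
  hosts: localhost
  tasks:
    - name: Render the static hosts file from the Jinja2 template
      ansible.builtin.template:
        src: aws/inventory/static/hosts
        dest: ./hosts

    - name: Point ansible.cfg at the new static inventory
      ansible.builtin.lineinfile:
        path: ./ansible.cfg
        regexp: '^inventory'
        line: 'inventory = ./hosts'
```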
# ansible-playbook static_inventory.yml
Playbook For Setting Up the Kubernetes Cluster -
- I have already created a role "kubernetes" for setting up the Kubernetes cluster. To use this role I created a playbook "test-kubernetes.yml".
- In this playbook the 2nd play runs on localhost, because in the first play the role copies some files: "admin.conf" (this file is for making a client for the Kubernetes cluster) and "token.json" (this file has the token info). The 2nd play removes these files.
- Right now the Ansible role "kubernetes" only supports "docker" as the container runtime engine and "flannel" as the container network interface. This role works only on RedHat and Amazon Linux.
- For this role we have to give the "init_info" variable for Kubernetes cluster initialization.
- In "test-kubernetes.yml" playbook I give ignore_preflight_errors because we use "t2.micro" instance on aws and this instance have only 1 CPU and 1GiB RAM but for kubernetes cluster we requires 2CPU and 2Gib RAM , but we want it run on 1 CPU and 1GiB RAM then we have to ignore these error in preflight .
- Run this playbook -
# ansible-playbook test-kubernetes.yml
Check Whether the Cluster Is Set Up Successfully -
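One quick way to verify, before the second play removes the copied files, is to query the cluster with the "admin.conf" kubeconfig (the path here is an assumption):

```
# From the workspace, using the admin.conf the role copied (path is an assumption)
kubectl get nodes --kubeconfig admin.conf
# A healthy cluster lists the master and both slave nodes in Ready state
```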
Now our Kubernetes cluster is successfully set up on the AWS cloud.