Creating a K8s cluster with | Ansible on provisioned AWS instances | Dynamic Inventory


Amazon Web Services:

We can define AWS (Amazon Web Services) as a secure cloud-services platform that offers compute power, database storage, content delivery, and various other functionalities. In short, it is a large bundle of cloud-based services.

Kubernetes:

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.

Ansible:

Ansible is the simplest way to automate apps and IT infrastructure: application deployment + configuration management + continuous delivery.

----------------------------------------------- START-----------------------------------------------------

In this article, I am going to configure a K8s cluster on AWS EC2 instances with the automation tool Ansible, using a dynamic inventory.

Why dynamic inventory?

For example, AWS by default assigns a dynamic public IP, so after every restart an instance gets a new IP. Or suppose we launch instances in the ap-south-1 region and need to use all of them as database servers; if there are 100 servers, instead of adding each one manually to the inventory, a Python script can act as a dynamic inventory that groups the instances by tags, region, subnets, etc.

First, let's start with provisioning the EC2 instances with Ansible.

Creating an Ansible role called provision-ec2:

ansible-galaxy init provision-ec2


Inside the role, create a vars file at provision-ec2/vars/main.yml.

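The vars file might look like the following. These values are illustrative placeholders of my own, not the exact ones from the screenshot; pick an AMI ID valid for your region.

```yaml
# provision-ec2/vars/main.yml -- illustrative placeholder values
instance_type: t2.micro
image_id: ami-0e306788ff2473ccb   # an Amazon Linux 2 AMI; use one from your region
region: ap-south-1
security_group: k8s-cluster
keypair: mykeypair
```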

In tasks/main.yml we launch the EC2 instances from an Amazon Linux AMI: one master and two slaves.

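A minimal sketch of the launch tasks is below. The variable names are my own assumptions; the tag key db is chosen so that the dynamic inventory later produces the tag_db_k8s_master and tag_db_k8s_slave host groups.

```yaml
# provision-ec2/tasks/main.yml -- minimal sketch; variable names are assumptions
- name: Launch the K8s master
  ec2:
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image_id }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    count: 1
    wait: yes
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      db: k8s_master

- name: Launch the K8s slaves
  ec2:
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image_id }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    count: 2
    wait: yes
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      db: k8s_slave
```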

Here, the most important part is the tags, i.e. k8s_master and k8s_slave, because we are going to use them in the dynamic inventory to separate the instances according to their roles.

The access and secret keys here were pre-generated by me; you can create your own as needed. They are assigned to an IAM user.


Instead of providing aws_access_key and aws_secret_key in the playbook, we can configure the AWS CLI, and Ansible will by default use the credentials we provided while configuring it.

I also used a security group named k8s-cluster that allows all inbound and outbound traffic. This is not good security practice, but it is fine for this project.


All the tasks in the role completed successfully.


Now one instance for the master and two for the slaves are ready to be configured for the K8s cluster.

Now let's move to the dynamic inventory part

The dynamic inventory will separate the instances according to region, tags, public and private IPs, and many more.

Ansible provides a script for the dynamic inventory, ec2.py, along with its configuration file, ec2.ini:

https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py

https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
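If you prefer to keep this step in Ansible itself, a short local play can fetch both files and mark the script executable. This is a sketch; the destination paths under /etc/ansible are my choice.

```yaml
# fetch-inventory.yml -- sketch; destination paths are assumptions
- name: Download the dynamic inventory script and its config
  hosts: localhost
  tasks:
    - name: Fetch ec2.py and make it executable
      get_url:
        url: https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py
        dest: /etc/ansible/ec2.py
        mode: '0755'

    - name: Fetch ec2.ini
      get_url:
        url: https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
        dest: /etc/ansible/ec2.ini
```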


The prerequisites for these scripts are boto and boto3, installed on the system where you run them:

pip install boto
pip install boto3

To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are a variety of methods available, but the simplest is just to export two environment variables:

export AWS_ACCESS_KEY_ID='your access key'
export AWS_SECRET_ACCESS_KEY='your secret key'

The second option is to copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini file to /etc/ansible/ec2.ini. Then you can run Ansible as you normally would.


We only have to run ec2.py to get the dynamic inventory.

The script separated the instances according to the tags "tag_db_k8s_master" and "tag_db_k8s_slave" and made each a separate host group, so we can use them in the playbook.
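To see what the script is doing conceptually, the tag-based grouping can be sketched in plain Python. The sample data below is made up; the real ec2.py pulls it from the EC2 API via boto.

```python
def group_by_tags(instances):
    """Group instance addresses into ec2.py-style host groups (tag_<key>_<value>)."""
    groups = {}
    for inst in instances:
        for key, value in inst["tags"].items():
            group = "tag_{}_{}".format(key, value)
            groups.setdefault(group, []).append(inst["public_ip"])
    return groups

# Made-up sample data mirroring the tags used in this article
instances = [
    {"public_ip": "13.233.1.10", "tags": {"db": "k8s_master"}},
    {"public_ip": "13.233.1.11", "tags": {"db": "k8s_slave"}},
    {"public_ip": "13.233.1.12", "tags": {"db": "k8s_slave"}},
]

inventory = group_by_tags(instances)
print(inventory["tag_db_k8s_master"])  # ['13.233.1.10']
print(inventory["tag_db_k8s_slave"])   # ['13.233.1.11', '13.233.1.12']
```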

Our dynamic inventory is ready to use.

Let's create a role for configuring the K8s master:

ansible-galaxy init k8s-cluster

In the k8s-cluster/vars/main.yml file


Steps in the below playbook /k8s-cluster/tasks/main.yml

1. Installing docker and iproute-tc

2. Configuring the yum repo for Kubernetes

3. Installing the kubeadm, kubelet, and kubectl packages

4. Enabling the docker and kubelet services

5. Pulling the config images

6. Configuring docker's daemon.json file

7. Restarting the docker service

8. Configuring the iptables bridge setting and refreshing sysctl

9. Initializing the cluster with kubeadm

10. Creating the .kube directory

11. Copying the admin config file into it

12. Installing add-ons, e.g. Flannel

13. Creating the join token
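An abridged tasks/main.yml covering these steps might look like the following. The repo URL, Flannel manifest URL, and kubeadm flags are common choices, not necessarily the exact ones from my screenshots.

```yaml
# k8s-cluster/tasks/main.yml -- abridged sketch; URLs and flags are assumptions
- name: Install docker and iproute-tc
  package:
    name: [docker, iproute-tc]
    state: present

- name: Configure the yum repo for Kubernetes
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: no

- name: Install kubeadm, kubelet and kubectl
  yum:
    name: [kubeadm, kubelet, kubectl]
    state: present

- name: Enable and start docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: Pull the kubeadm config images
  command: kubeadm config images pull

- name: Set docker's cgroup driver to systemd in daemon.json
  copy:
    dest: /etc/docker/daemon.json
    content: '{ "exec-opts": ["native.cgroupdriver=systemd"] }'

- name: Restart the docker service
  service:
    name: docker
    state: restarted

- name: Configure iptables bridging and refresh sysctl
  shell: |
    echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
    sysctl --system

- name: Initialize the cluster
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

- name: Create the .kube directory and copy the admin config
  shell: |
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

- name: Install the Flannel add-on
  command: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

- name: Create the join token for the slaves
  command: kubeadm token create --print-join-command
  register: join_command
```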


The role ran successfully.


The playbook file created for running the role (k8s-cluster):


Let's configure the slaves on the other two EC2 instances.

  • Create a role k8s-slave
ansible-galaxy init k8s-slave

In /k8s-slave/vars/main.yml


Steps in the below playbook /k8s-slave/tasks/main.yml

1. Installing docker and iproute-tc

2. Configuring the yum repo for Kubernetes

3. Installing the kubeadm, kubelet, and kubectl packages

4. Enabling the docker and kubelet services

5. Pulling the config images

6. Configuring docker's daemon.json file

7. Restarting the docker service

8. Configuring the iptables bridge setting and refreshing sysctl


The role was run successfully.


The playbook file created for running the role (k8s-slave):


Join the slaves using the token created above.
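If the master role registers the output of "kubeadm token create --print-join-command", the slaves can consume it directly. This is a sketch: join_command is a register-variable name I assumed, and the group name is the one produced by the dynamic inventory.

```yaml
# k8s-slave/tasks/main.yml (final task) -- hypothetical sketch
- name: Join this node to the cluster using the master's token
  command: "{{ hostvars[groups['tag_db_k8s_master'][0]]['join_command']['stdout'] }}"
```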


After running "kubectl get nodes" on the master:


Let's launch a pod on this cluster

kubectl create deployment success --image=vimal13/apache-webserver-php

# exposing the deployment
kubectl expose deployment success --type=NodePort --port=80

kubectl get pods -o wide


----------------------------- The pod was launched on the first slave -----------------------------


GitHub repo: https://bit.ly/3aQDh3O

The code will be further optimized in the coming days.

Thank You!!!

Keep Learning!! Keep Sharing!!!

If you get stuck somewhere, message me on LinkedIn.

Successfully created a K8s cluster with Ansible on provisioned AWS instances with a dynamic inventory.









 
