Setting up a Kubernetes cluster on top of AWS using EKS
1. Initially, we download the AWS CLI software (Link --> aws.amazon.com/cli/) (shown below)
2. Then we download the eksctl command-line tool, which will be used to launch the cluster in a single step (Link --> https://github.com/weaveworks/eksctl) (shown below)
3. After installing the AWS CLI (Step 1), we can work with AWS from the command line. We used the command --> aws configure, and to check the version we used --> aws --version.
Here we entered the Access Key and Secret Key from the credentials file we downloaded while creating the IAM account that was given admin rights. (Shown above)
Finally, we also checked with the aws eks list-clusters cmd whether any cluster has been launched so far. OBVIOUSLY NOT!
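For quick reference, the command-line sequence looks roughly like this (a minimal sketch; the keys you paste into aws configure are your own):

    aws configure            # paste the Access Key and Secret Key here
    aws --version            # confirm the CLI is installed
    aws eks list-clusters    # no clusters yet, so the list is empty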
4. So, to create the cluster, we first have to write the cluster.yml file shown below.
Something about the eksctl cmd --> it works from a config file (kind --> ClusterConfig) in which every section tells eksctl what to set up for us. Note >> here we have to specify the requirement of worker/slave nodes, but in the case of a Fargate cluster it is not required.
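For reference, a minimal sketch of what such a cluster.yml (kind: ClusterConfig) can look like; the cluster name lwcluster matches the kubeconfig step later, while the region, node-group name, and instance type here are assumptions:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig

    metadata:
      name: lwcluster        # our cluster's name
      region: ap-south-1     # assumed region; use your own

    nodeGroups:              # worker/slave node requirement (not needed for Fargate)
      - name: ng1            # hypothetical node-group name
        instanceType: t2.micro
        desiredCapacity: 2   # number of worker nodes to launch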
5. Then, to launch the cluster, we use the cmd --> eksctl create cluster -f cluster.yml (o/p shown below).
We can also see on the AWS console that our cluster is getting created. (Shown below --> it is showing the status as CREATING)
Once the cluster is launched, the status becomes ACTIVE.
On the command line you will see this after the cluster is in the ACTIVE state:
Now when we check the list of running clusters, we will find our new cluster there (shown below):
Or use the equivalent eksctl command:
6. Now we update the kubeconfig file for lwcluster. This config file is used by kubectl to get in contact with the master and request that pods be launched on the worker/slave nodes.
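The exact command used is in the screenshot; the standard AWS CLI way to do it is the following (the region is an assumption, match it to cluster.yml):

    aws eks update-kubeconfig --name lwcluster --region ap-south-1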
Next we used the cmd --> kubectl get nodes to get the details of all the nodes launched. We can also see that no pods have been launched so far.
Then we looked at the details of one of the nodes (shown below, using the kubectl describe node cmd).
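A quick sketch of those checks (the node name is whatever kubectl get nodes printed):

    kubectl get nodes                    # lists the worker nodes of the cluster
    kubectl get pods                     # nothing launched yet
    kubectl describe node <node-name>    # detailed info about one node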
Now we will create a namespace in which we will launch the pods (kubectl create namespace <name_of_ns_u_want>)
7. To make lwns (the namespace created by us) the default namespace we have used the cmd--> kubectl config set-context --current --namespace=lwns
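Putting the two namespace commands together (lwns is the name we chose):

    kubectl create namespace lwns
    kubectl config set-context --current --namespace=lwns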
Checking out the info of the cluster that we have created (launched using the EKS service of AWS):
Now we will launch the pods on the worker/slave nodes by making a request to the master, and finally we will expose those pods so that clients can connect to the webpage served by them. The pods run the image we created. (In my case, I have used the image created by our mentor --> vimal13/apache-webserver-php. For more details --> LINK --> https://hub.docker.com/r/vimal13/apache-webserver-php)
DIAGRAM WITH COMPLETE EXPLANATION
REMEMBER > this master-slave K8S cluster setup is launched by us on AWS using EKS.
8. So now we launch the pods using that image, which gives us a sample webpage to which clients can easily connect. We did exactly that (can be seen in the img. below, and in the command sketch after the next line). Behind the scenes EKS uses EC2 to provide the nodes the pods run on; this auto-provisioning is done via CloudFormation, and a stack is created behind the scenes.
We have also scaled out and created 3 replicas (can be seen in the image above), so we check whether all 3 pods have been launched.
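A minimal sketch of those steps with kubectl (the deployment name myweb is reused from the PVC step later in this article):

    kubectl create deployment myweb --image=vimal13/apache-webserver-php
    kubectl scale deployment myweb --replicas=3   # scale out to 3 replicas
    kubectl get pods                              # check that all 3 pods are Running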
9. Finally, we exposed the pods using the cmd shown below in the image; to get the complete details of the pods, use >> kubectl get pods -o wide.
We used LoadBalancer as the service type so that outside clients can connect to the site and the load is balanced across the pods (shown below):
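A hedged sketch of the expose step (port 80 is an assumption based on the Apache image):

    kubectl expose deployment myweb --type=LoadBalancer --port=80
    kubectl get svc            # note the EXTERNAL-IP / ELB DNS name of the service
    kubectl get pods -o wide   # complete details of the pods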
To see what content will be displayed to the client, we move inside the pod and check the webpage.
Do--> exit (to come out of the pod)
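A minimal sketch of that in-and-out sequence (the pod name is a placeholder; take a real one from kubectl get pods):

    kubectl exec -it <pod-name> -- /bin/bash   # move inside the pod
    ls /var/www/html                           # check the webpage content (run inside the pod)
    exit                                       # come out of the pod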
10. Also, in real life we often need to modify the site running in the pod; for this we can copy a file from outside into the pod's /var/www/html location.
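A hedged example of such a copy; index.html here is a hypothetical local file:

    kubectl cp ./index.html <pod-name>:/var/www/html/index.html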
11. Whatever content we copy into the pod is stored in the pod's own storage, which is temporary or ephemeral in nature. If our pod gets deleted for some reason, or if our node goes down, all of the copied content is erased, and since the replacement pod is launched from the image, only the image's original content comes back. We want the content inside /var/www/html/ not to get erased, i.e. we want persistent storage. For this we will create a PVC (Persistent Volume Claim) that claims storage from a PV, which in turn takes space from the underlying resources. This PVC will be mounted on the folder /var/www/html/, since that is the folder we want to be persistent.
We have created the pvc using the pvc.yml file shown below:
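A minimal sketch of what the pvc.yml can look like (the claim name lwpvc and the 1Gi size are assumptions; with no storageClassName given, the default sc gp2 gets used):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lwpvc            # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi       # assumed size of the claimed storage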
Here we can see that the status of the pvc is Pending. REASON >> the storage class used by this pvc is gp2 (the by-default storage class), and gp2 makes the PVC wait behind the scenes until some pod actually requests it.
12. So now we have to make the pod request the PVC for storage, since the PVC is waiting for exactly that. To make the pod request the pvc, we made changes to the deployment in the following way. Use the cmd --> kubectl edit deploy myweb (since the name of the deployment is myweb); a sketch of the relevant section is shown below.
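A sketch of the part that gets added under spec.template.spec of the deployment (the volume name web-vol is hypothetical; the claim name must match your pvc):

    # inside kubectl edit deploy myweb, under spec.template.spec:
    volumes:
      - name: web-vol                  # hypothetical volume name
        persistentVolumeClaim:
          claimName: lwpvc             # must match the PVC created above
    containers:
      - name: apache-webserver-php     # the existing container entry (name may differ)
        image: vimal13/apache-webserver-php
        volumeMounts:                  # added to the existing container
          - name: web-vol
            mountPath: /var/www/html   # the folder we want to be persistent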
Finally, after making the request from the pod for the PVC (after doing the editing), we can see the status of the PVC as BOUND.
Now, if we do not want a pod request to be needed before the PVC binds, we can create our own storage class and leave out the binding behaviour that is present in the default storage class.
The setting that makes the pvc wait for a pod request is --> volumeBindingMode, which is set to WaitForFirstConsumer (strictly speaking a field of the storage class, not an annotation). Here the consumer is the pod. See in the image below that it is shown. And this is the gp2 sc file that I was editing, so don't be confused after seeing that "false": it was initially set to "true". OBVIOUSLY >> the default sc is gp2 only.
So now I created my own storage class (file shown below, with a hedged sketch after it). I gave it the type io1 and not gp2; it always depends on what kind of content we will put in our pods, so if we want fast read/write then the sc is chosen accordingly.
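A minimal sketch of such a StorageClass, assuming the in-tree AWS EBS provisioner; reclaimPolicy Retain is implied by the name lwsc-retain, and the iopsPerGB value is an assumption (io1 volumes need it):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: lwsc-retain
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: io1          # fast read/write, unlike the default gp2
      iopsPerGB: "10"    # assumed IOPS setting for io1
    reclaimPolicy: Retain
    volumeBindingMode: Immediate   # bind as soon as the PVC is created (no waiting for a pod)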
SEE --> now we have created the pvc again, this time using the lwsc-retain storage class (created by us).
And we can see the status as BOUND and not PENDING, since our sc does not wait for a first consumer this time.
Now, if we want lwsc-retain to be the default storage class, and also want PVCs using this sc to wait for a pod request, we can open gp2 using >> kubectl describe sc gp2, copy its annotations, and add them to lwsc-retain (editing it with >> kubectl edit sc lwsc-retain, since describe only displays the object).
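For completeness, the standard way to flip the default storage class via annotations is kubectl patch:

    kubectl patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
    kubectl patch storageclass lwsc-retain -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'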
I FORGOT TO TAKE THE SCREENSHOTS FOR THIS BUT IT IS AN EASY THING...YEAH!!
Finally, we can see the webpage coming up in the browser successfully... AND IT'S DONE :)
To delete the entire cluster --> Use the command eksctl delete cluster -f cluster.yml
Finally, delete the stacks from the AWS console. Check that no instances are running and that all EKS services are terminated, else the charges will keep increasing...!!!!
DON'T MAKE THIS MISTAKE.......!!
I made the mistake of not checking everything, so my bill went from 119.52 INR to 418.96 INR because my stacks were not deleted and instances were still running.
SUMMARIZED FORM OF >> what we did:
> Launched the master-slave cluster setup on top of AWS using EKS.
> Launched the pods by making a request to the master.
> Exposed the pods so that clients can access them.
> Created a PVC and mounted it on the folder /var/www/html of the pod, so that any new content copied into /var/www/html stays persistent.
> Since the status of the PVC was coming up Pending, we made the pod request it, and the status of the PVC became BOUND.
> At the end, created our own SC (storage class), set it as default and removed gp2 from default; since there was no WaitForFirstConsumer in the new SC, the PVC went straight to BOUND and not PENDING right after creation.