-----EKS Task-----

First, I want to share what I learned over these two days...

Here we go

Day-1 Amazon Elastic Kubernetes Service (EKS)

1. EKS is a managed service that integrates Kubernetes with the other AWS public cloud services.

2. To launch containers to deploy an app we need resources, which is why we move to the public cloud.

3. The use case of clustering is that it provides a fault-tolerant or failover setup.

4. Kubernetes is the container orchestration engine used to manage the nodes.

5. The Kubernetes cluster architecture was discussed.

6. The Kubernetes internal master/node programs were covered (API server, scheduler, kube-controller-manager, etcd, kubelet).

7. EKS lets us choose how many worker nodes we want and what resources we want for each node.

8. EKS has the capability to go to EC2 and launch instances for us.

9. The aws eks commands are one way to launch a Kubernetes cluster using the EKS service.

10. Downloading, installing, and setting up the eksctl command, which is specialized for the EKS service.

11. Node groups are used to launch nodes in a data center, i.e. an availability zone.

12. A YAML script for launching node groups was written and explained.

13. The eksctl get cluster and eksctl create cluster commands were used to launch a cluster.

14. EKS internally uses a CloudFormation stack for automatic provisioning.

15. How to attach a key to the instances was shown (using ssh under nodeGroups in cluster.yml).

16. How to connect to the Kubernetes cluster.

17. How to create a kubeconfig file.

18. The aws eks update-kubeconfig command is used to create and update the kubeconfig file.

19. Practical on creating a namespace and launching a pod in it.

20. kubectl cluster-info is the command to check cluster connectivity.

21. The kubectl config set-context --current --namespace command is used to modify the current context.

22. A practical on how to create and scale a deployment was shown.

23. A pod launches in isolation, so to expose it to the outside world we use a LoadBalancer service.

24. The ELB service of AWS provides load balancing and also gives us a public-facing endpoint.

25. ELB even provides load balancing across data centers.

26. A practical on updating web pages inside a pod was performed.

27. The challenge with the above practical is that the pod has ephemeral storage, so when the pod is deleted the updated data is deleted with it.

28. Using the PVC, PV, and StorageClass concepts we created persistent storage: when a PVC is created the PV gets dynamically provisioned, and the storage class provides the permanent storage mounted into the pod, so the updated data in the pod persists.

29. The importance of a Deployment was discussed: it gives the same configuration to the pod that replaces a deleted one.

30. A practical on attaching a storage class of a different type (say io1) to the pod, for which we created another PVC and StorageClass.

31. The ReclaimPolicy concept of PV and StorageClass was discussed, including how and why to change it to Retain (so the volume can be reused or attached to another pod in the future).

32. A practical on how to make another storage class the default.

Day-2 Amazon Elastic Kubernetes Service (EKS)

Spot Instances: AWS EC2 Spot Instances let you use spare, unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.

We can create spot instance node groups using a script in YAML format, as sketched below.
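A minimal sketch of what such a spot node group can look like inside an eksctl cluster config (the node group name, sizes, and instance types here are placeholders, not the exact values from the task):

nodeGroups:
  - name: ng-spot                                # placeholder node group name
    minSize: 2
    maxSize: 4
    instancesDistribution:                       # eksctl's way of asking for spot capacity
      instanceTypes: ["t3.small", "t3.medium"]   # mixed instance types improve spot availability
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0     # 0 means everything above the base is spot
      spotInstancePools: 2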


We have two ways to manage our k8s cluster:

* Managed by us: here we run our own external Kubernetes cluster, and we can plug in external load balancer providers, etc.

* Managed by AWS EKS: here AWS manages the whole cluster for us, and behind the scenes EKS integrates with AWS services such as EC2, EBS, ELB, EFS, CloudWatch, etc. So we can say that EKS is internally linked, or tightly coupled, with these services.

Why do we need EFS?


* If the load suddenly increases we run multiple OS instances in parallel, i.e. horizontal scaling, and every OS is configured with the Apache web server. We attach an EBS volume per OS, but the same EBS volume cannot be attached to another OS. So if a developer changes any code on one OS, they have to copy that code to the other OS instances as well. EBS won't keep your code or data in sync across a sudden scale-out, because an EBS volume belongs to a single instance.


* Then the role of EFS comes into play: it gives you one centralized storage exposed over NFS.

* NFS stands for Network File System. It is easily mounted on multiple OS instances over the network, and if a developer changes any code in this storage, all the OS instances can access the updated code.

* NFS is a protocol name. EFS gives you a file system and creates an NFS server for it.

* NFS shares files over the network, and here you can easily edit the files.


Helm: in Kubernetes we have a package manager, or chart manager, known as Helm.

* Helm Hub provides us Kubernetes-ready apps.

* The client always uses the helm command to install apps or packages.

* Helm also has a server known as Tiller (the server-side component of Helm v2).

* To initialize Helm we have to use the "helm init" command.

* We can launch Jenkins, Prometheus, and Grafana with a single command using Helm, and these tools are internally connected to the Kubernetes cluster.


------------------So here we go with some practical stuff

First we have to create a user who can access AWS from the CLI. For this we use the IAM service to create a user with admin power, because only an admin has the power to do anything in any service.


Now we have to configure AWS from the CLI so we can use it. To configure AWS we use the command # aws configure and give the AWS Access Key ID, the AWS Secret Access Key, and the name of the region.

These values were provided when the user was created (I erased mine for security).
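The interactive prompts look roughly like this (the region is only an example; paste in your own keys):

# aws configure
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: ap-south-1
Default output format [None]: json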


Now we create a cluster. To create the cluster we write YAML code, because Kubernetes and eksctl support YAML for declaring resources. In my case I named the file cluster.yml.
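A minimal sketch of what cluster.yml can look like for eksctl (the cluster name, region, key name, and node sizes below are placeholders, not necessarily the exact values from my file):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: mycluster              # placeholder cluster name
  region: ap-south-1           # placeholder region

nodeGroups:
  - name: ng1                  # node group for the worker nodes
    desiredCapacity: 2         # how many worker nodes we want
    instanceType: t2.micro     # resources for each node
    ssh:
      publicKeyName: mykey     # attaches this key pair to the instances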


To use the EKS service we need the eksctl command, so we install eksctl on the PC and put the binary in the same location where we have minikube and kubectl, so that it is already on the system PATH.


We run the command to create the cluster using this YAML file:

 eksctl create cluster -f cluster.yml

Set the PATH environment variable so the tools can be accessed easily from anywhere and work properly.

To verify, run the version command; if it shows you a version then it's working fine.
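On a Linux/macOS shell the idea looks roughly like this (the directory is a placeholder; on Windows you would add the folder to PATH through the system settings instead):

export PATH=$PATH:/path/to/eksctl-directory   # folder that holds the eksctl binary
eksctl version                                # prints a version string if the setup is fine
kubectl version --client                      # same kind of check for kubectl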



Now we can use the # kubectl config view command to check the whole configuration set up for the cluster.

Now we will update the kubeconfig file with the help of

# aws eks update-kubeconfig --name mycluster

so that we can run further commands against the cluster.

Now we will create the namespace with the help of the command

# kubectl create namespace eks-sid-ns
 



We will also set the context to the namespace created with the previous command, with the help of the command

# kubectl config set-context --current --namespace=eks-sid-ns

# kubectl config view

# aws eks update-kubeconfig --name <cluster name>

# kubectl cluster-info



Now we create a deployment with this image

# kubectl create deployment <name> --image=vimal13/apache-webserver-php

# kubectl get pods

Now we can check the details of the pods & deployments with the help of the commands

# kubectl get pods

# kubectl get deployment

The benefit of a deployment is this: if we launch a pod simply with the run command and that pod gets deleted by mistake, it cannot relaunch itself; but in the case of a deployment, the replication controller launches a new pod automatically. That's why we use deployments.
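A quick way to see this self-healing behaviour (the pod name is a placeholder; copy it from the kubectl get pods output):

# kubectl get pods              # note the name of a running pod
# kubectl delete pod <pod-name> # delete it "by mistake"
# kubectl get pods              # the deployment has already launched a replacement pod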

Now we can check the IPs of the pods and the nodes they were launched on with the help of

 # kubectl get pods -o wide 

We can also increase the number of pods with the help of replicas. The command for that is

 # kubectl scale deployment myweb --replicas=3

Now we will expose our deployment, so that the web page inside our Docker image can be shown to anyone in the world.

There are basically three service types we can use to expose it to the world, but here we use the LoadBalancer type to expose our web page, with the help of the command

 # kubectl expose deployment myweb --type=LoadBalancer --port=80



Here we are using the load balancer to expose our web page or site to the outside world, so we need an address for the website; for this, the load balancer gives us a DNS name that we use in place of an IP.


On every refresh the backend IP changes, as you can see above. This is the load balancer at work: it shifts each client to a different pod to balance the load coming from the outside world.

# kubectl describe service/myweb

# kubectl delete all --all

Now we delete all the resources (pods, services, deployments); if you then check the AWS console, the resources EKS created for them, like the load balancer, are gone as well.


When we delete anything from the command line it also gets deleted from the web UI, without us going there. That's the benefit of using the command line.

Now we will also create another deployment, this time for hosting index.php.


Now we create a PVC file with the .yml extension, where we write the code in YAML format. This lets us create the PVC automatically from the YAML code.
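A minimal sketch of such a PVC file (the claim name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc                  # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce            # one node can mount it read-write
  resources:
    requests:
      storage: 5Gi             # placeholder size

It is created with # kubectl create -f pvc.yml, after which the PV is provisioned dynamically.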


Now we open the deployment with the help of an editor so that we can edit it: we add the PVC claim name, the volume, and the mount path. After editing the file we save it and see the changes in the deployment.
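The edited part of the deployment spec ends up looking roughly like this (the volume name and mount path are assumptions for illustration; the mount path should be the document root used by the web server image):

    spec:
      containers:
      - name: apache-webserver-php
        image: vimal13/apache-webserver-php
        volumeMounts:
        - name: web-data                 # placeholder volume name
          mountPath: /var/www/html       # assumed Apache document root
      volumes:
      - name: web-data
        persistentVolumeClaim:
          claimName: mypvc               # the PVC created above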


When we delete the PVC from the command line, it also deletes the volume (you can confirm this in the web UI), because the default reclaim policy is Delete.

Now we create a storage class with the help of YAML code and apply it:

#  kubectl create -f sc.yml
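A minimal sketch of what sc.yml might contain for an io1-type EBS storage class (the class name and IOPS value are placeholders):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-io1-sc                      # placeholder storage class name
provisioner: kubernetes.io/aws-ebs     # in-tree AWS EBS provisioner
parameters:
  type: io1                            # provisioned-IOPS EBS volume type
  iopsPerGB: "10"                      # placeholder IOPS-per-GB setting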



Now we set the annotation on our storage class: for that we edit the existing default storage class, copy the annotation part, and paste it into our self-created storage class.
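The annotation in question is the is-default-class annotation. A sketch, assuming the class name used above; the matching kubectl patch command (the standard Kubernetes procedure for changing the default class) removes the default flag from the existing gp2 class:

metadata:
  name: my-io1-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"

# kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'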


When we set the reclaim policy to Retain, deleting the PVC from the command line does not delete our volume from storage; it is retained and we can attach it to another pod for future use.
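One way to flip an already-created PV to Retain from the command line (the PV name is a placeholder taken from kubectl get pv); alternatively, reclaimPolicy: Retain can be set directly in the storage class so that newly provisioned volumes get it by default:

# kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'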



For deeper knowledge, go through this:

https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/


Now we launch a Fargate profile.

What is a Fargate profile?

-> The Fargate profile allows an administrator to declare which pods run on Fargate. This declaration is done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace and optional labels. You must define a namespace for every selector. The label field consists of multiple optional key-value pairs. Pods that match a selector (by matching a namespace for the selector and all of the labels specified in the selector) are scheduled on Fargate. If a namespace selector is defined without any labels, Amazon EKS will attempt to schedule all pods that run in that namespace onto Fargate using the profile. If a to-be-scheduled pod matches any of the selectors in the Fargate profile, then that pod is scheduled on Fargate.

If a pod matches multiple Fargate profiles, Amazon EKS picks one of the matches at random. In this case, you can specify which profile a pod should use by adding the following Kubernetes label to the pod specification:

 eks.amazonaws.com/fargate-profile: profile_name

Fargate profile components

The following components are contained in a Fargate profile.

{
    "fargateProfileName": "",
    "clusterName": "",
    "podExecutionRoleArn": "",
    "subnets": [
        ""

    ],

    "selectors": [
        {
            "namespace": "",
            "labels": {
                "KeyName": ""
            }
        }

    ],

    "clientRequestToken": "",

    "tags": {

        "KeyName": ""
    }
   
}

For deeper knowledge, go through the AWS documentation about the Fargate profile:

https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html

Run these commands before going for the Fargate profile:

aws eks update-kubeconfig --name <name of cluster>


eksctl get fargateprofile --cluster <name of cluster>
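If you want to create a profile from the CLI instead of the console, eksctl also has a command for it; a sketch with placeholder names:

eksctl create fargateprofile --cluster <name of cluster> --name <profile-name> --namespace <namespace>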

Now we log in to one of the cluster nodes to see what is happening inside it.

ssh -i <key-name>.pem -l ec2-user <ip>  # log in to a worker node

sudo su - root                          # switch to the root user

free -m                                 # check free memory/RAM

systemctl status docker                 # check the Docker service status
      

The Kubernetes (EKS) documentation by AWS is shown above.

Now we have to run

WordPress and MySQL pods on top of AWS.

For that, we have to write the kustomization that links WordPress and MySQL.


This is our kustomization file, written in YAML format with the .yml extension.

In the above kustomization file, the resources come from the files mysql-deployment.yaml and wordpress-deployment.yaml.
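The kustomization.yaml is roughly of this shape (it follows the standard WordPress-on-Kubernetes example; the password literal is a placeholder):

secretGenerator:
- name: mysql-pass
  literals:
  - password=<your-password>           # placeholder database password
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml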

kubectl create -k .

Now, with the help of the above command, our Secret, Services, Deployments, and persistent volumes have been created successfully.


-------EFS

Amazon EFS provides file storage in the AWS Cloud. With Amazon EFS, you can create a file system, mount the file system on an Amazon EC2 instance, and then read and write data to and from your file system. You can mount an Amazon EFS file system in your VPC, through the Network File System versions 4.0 and 4.1 (NFSv4) protocol. We recommend using a current-generation Linux NFSv4.1 client, such as those found in the latest Amazon Linux, Redhat, and Ubuntu AMIs, in conjunction with the Amazon EFS Mount Helper. For instructions, see Using the amazon-efs-utils Tools.

To access your Amazon EFS file system in a VPC, you create one or more mount targets in the VPC. A mount target provides an IP address for an NFSv4 endpoint at which you can mount an Amazon EFS file system. You mount your file system using its Domain Name Service (DNS) name, which resolves to the IP address of the EFS mount target in the same Availability Zone as your EC2 instance. You can create one mount target in each Availability Zone in an AWS Region. If there are multiple subnets in an Availability Zone in your VPC, you create a mount target in one of the subnets. Then all EC2 instances in that Availability Zone share that mount target.
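From an EC2 instance (for example one of the worker nodes), mounting looks roughly like this with the EFS mount helper (the file system ID and mount point are placeholders):

sudo yum install -y amazon-efs-utils         # EFS mount helper, available on Amazon Linux
sudo mkdir -p /mnt/efs                       # placeholder mount point
sudo mount -t efs fs-xxxxxxxx:/ /mnt/efs     # fs-xxxxxxxx is your EFS file system ID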


-----HELM

Helm helps you manage Kubernetes applications 

Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

helm init    # initializes Helm and sets up Tiller, its server side, on the cluster

helm repo add stable https://kubernetes-charts.storage.googleapis.com/


helm repo list

helm repo update


Now we configure Prometheus and Grafana as monitoring and visualization tools.

Create a namespace for Prometheus:

kubectl create namespace prometheus


To install Prometheus on top of EKS:

helm install  stable/prometheus     --namespace prometheus     --set alertmanager.persistentVolume.storageClass="gp2"     --set server.persistentVolume.storageClass="gp2"


And to check the service and forward its port locally:

kubectl get svc -n prometheus

kubectl -n prometheus  port-forward svc/flailing-buffalo-prometheus-server  8888:80


To install Grafana on top of EKS, first create a namespace:

kubectl create namespace grafana


And then install and configure it, with Prometheus wired in as the default data source:

helm install stable/grafana  --namespace grafana     --set persistence.storageClassName="gp2" --set adminPassword='GrafanaAdm!n'    --set datasources."datasources\.yaml".apiVersion=1     --set datasources."datasources\.yaml".datasources[0].name=Prometheus   --set datasources."datasources\.yaml".datasources[0].type=prometheus    --set datasources."datasources\.yaml".datasources[0].url=https://prometheus-server.prometheus.svc.cluster.local   --set datasources."datasources\.yaml".datasources[0].access=proxy     --set datasources."datasources\.yaml".datasources[0].isDefault=true  --set service.type=LoadBalancer







kubectl get  secret  worn-bronco-grafana   --namespace  grafana  -o yaml
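To actually read the Grafana admin password out of that secret, it can be decoded from base64; a sketch assuming the release name shown above and the chart's usual admin-password key:

kubectl get secret --namespace grafana worn-bronco-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo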






All the code I used is easily available on my GitHub; just go through it:

https://github.com/githubvillain/eks-task



Thanks for reading. If you have any doubts, feel free to ask.













