Amazon Kubernetes as a Service
Kritik Sachdeva
Technical Support Professional at IBM | RHCA-XII | OpenShift | Ceph | Satellite | 3Scale | Gluster | Ansible | Red Hatter
Why do we need AWS to run our Kubernetes? What is Kubernetes? And so on...
In this post, I will cover some of these questions and, finally, show a small project on how we can make use of the AWS EKS service to deploy our application.
So let's dive into the answers quickly...
What is Kubernetes?
Kubernetes is a platform that provides the feature of orchestrating containers running on top of container engines such as Docker, CRI-O, or Podman. The latest versions of Kubernetes make use of CRI-O to launch containers.
It is a management platform that handles the intercommunication between containers and applies best practices to make our application highly available, along with other features too.
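For a concrete feel of what Kubernetes orchestrates, here is a minimal Deployment manifest; the names and image are placeholders chosen for illustration:

```yaml
# Minimal Deployment: Kubernetes keeps 2 replicas of this container running,
# rescheduling them onto healthy nodes if one of them fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web            # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: nginx:1.19   # any container image works here
          ports:
            - containerPort: 80
```

Everything we deploy later through Helm is ultimately made of resources like this one.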
To set up or install a Kubernetes cluster there are two ways: a) launch it on-premise, or b) launch it on the cloud, i.e. a public cloud. All the major public clouds, including GCP, Azure, and AWS, provide a service for launching a Kubernetes cluster.
(In this post I will cover the same for the AWS cloud; the AWS service that provides this is known as EKS, the Elastic Kubernetes Service.)
So, why AWS? And what difference does it provide?
To understand this, we must have at least a glimpse of how Kubernetes works.
The main element of a Kubernetes cluster, responsible for deploying and managing the applications, is the Master Node.
The master node is composed of several sub-elements: the API server, the Kube Scheduler, the ETCD database, and the Kube Controller. Their names describe their purposes:
- The API server is responsible for taking requests from the client
- The scheduler is responsible for scheduling, that is, deciding where to launch the application among the worker nodes ( worker nodes are called Node Groups in EKS )
- The controller is the one that requests the respective worker node to launch the application
- The ETCD database is used for storing the data needed for the intercommunication of containers: IPs, certificates, etc.
As the master node is the main entry point for system administrators or developers to connect to and deploy applications, it needs to be highly available, along with best practices for networking, devices, storage, and other resources.
The difference the AWS EKS service provides is this:
the implementation of these best practices, or in other words the management of the master node, is provided by the EKS service itself. EKS takes complete responsibility for keeping the master node fully active and working, plus it provides the additional benefit of integrating other AWS services with it, such as disaster recovery, to make the service highly available.
Now let's see how we can create the Kubernetes cluster on AWS using EKS.
To connect to AWS, there are 3 ways: a) the WebUI, b) the command line (CLI), c) an SDK. The best way is the command line, as it gives us more flexibility. The prerequisites for working through the CLI are:
- The aws-cli software installed
- The Kubernetes client utility, kubectl (to connect to Kubernetes)
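Assuming a Linux machine with pip available, the prerequisites can be installed roughly as follows; the kubectl download URL is the one published in the Kubernetes docs, so adjust it for your platform:

```shell
# Install the AWS CLI (any recent version works)
pip3 install --user awscli

# Install the Kubernetes client utility (kubectl) for Linux amd64
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# Configure the AWS CLI with the access key of an IAM user
aws configure
```

The IAM user you configure here needs permissions for EKS, EC2, CloudFormation, and the related services.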
But there is an issue with the aws-cli software: it doesn't give us much flexibility in creating the cluster, for example in choosing how many and what kind of resources we need for our application ( I'm talking about the worker nodes ). To allow this customization of its cloud services, AWS uses the CloudFormation tool to create the infrastructure on the cloud.
And to make use of it, there is another tool/utility, made by the open-source company Weaveworks, called eksctl. The eksctl utility uses a domain-specific language, YAML, to describe and launch the cluster.
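eksctl ships as a single binary; on Linux it can be fetched from the Weaveworks GitHub releases (this is the install snippet from the eksctl docs, adjust for your platform):

```shell
# Download the latest eksctl release for this OS and unpack it
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# Verify the installation
eksctl version
```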
For Project, our steps would be:
- Create a Kubernetes Master on AWS EKS
- Integrate EKS with other services like EC2, EBS, VPC, ELB, etc
- Deploy the WordPress application using Helm
- Deploy Grafana and Prometheus as monitoring tools using Helm
1) Creating a Kubernetes cluster using eksctl in YAML
To create the K8s cluster there are two ways:
a) Only the master node is managed by AWS; we manually specify the resources for the worker nodes and scale them up manually based on the stats and requirements
b) The complete cluster, from master to worker nodes, is managed by AWS itself, from scaling to managing the master node ( simply the complete infrastructure of Kubernetes )
A) The first way: launching the K8s cluster using the EKS service
Create a file named cluster.yaml and specify which instances you want to use. eksctl will then automatically integrate EC2, VPC, etc.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-01          # Name of the cluster
  region: ap-south-1    # AWS region where to launch the K8s master
nodeGroups:
  - name: Group1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:                # Specifying the SSH key, used to log in to the instance ( if needed )
      publicKeyName: openstack
  - name: Mixed-group1  # Making the node group more dynamic
    minSize: 1
    maxSize: 2
    instancesDistribution:
      maxPrice: 0.1
      instanceTypes: ["t2.micro"]   # List of instance types that you want to be used
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName: openstack
To create the cluster use command,
eksctl create cluster -f cluster.yaml
( In the output of this command we can see that it uses CloudFormation to create the infrastructure for the Kubernetes cluster )
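You can inspect the CloudFormation side of this yourself; the stack names below follow eksctl's `eksctl-<cluster-name>-*` naming convention and may differ slightly between versions:

```shell
# List the CloudFormation stacks that were created
# (eksctl names them like eksctl-eks-01-cluster and eksctl-eks-01-nodegroup-Group1)
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE
```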
To verify the cluster is being created, we have two ways:
a) aws eks list-clusters
b) eksctl get cluster
To see the node groups that are created or associated with the cluster we have created, again there are two ways:
a) aws eks list-nodegroups --cluster-name <name of cluster> ( here it is eks-01 )
b) eksctl get nodegroup --cluster <name of cluster>
B) The second way: launching the K8s cluster using the Fargate service
Create a file named cluster.yaml and specify the name of the Fargate profile and the namespaces of the master that we want to access.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: feks-01         # Name of the cluster
  region: ap-south-1    # AWS region where to launch the K8s master
fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default
To create the cluster use command,
eksctl create cluster -f cluster.yaml
( In the output of this command we can see that it uses CloudFormation to create the infrastructure for the Kubernetes cluster )
To verify the cluster is being created, we have two ways:
a) aws eks list-clusters
b) eksctl get cluster
Before going on to the deployment step, we need to add this cluster's credentials so that our local Kubernetes utility is able to connect to it; for this, use the command:
aws eks update-kubeconfig --name <clustername>
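Once the kubeconfig is updated, a quick sanity check confirms that kubectl is now pointed at the EKS cluster:

```shell
kubectl config current-context   # should show the ARN of the EKS cluster
kubectl get nodes                # the worker nodes from the node group should show STATUS Ready
```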
2. Deploying WordPress application using Helm
Now the question comes: what is Helm? And why do we need it?
Helm is a tool used for managing the resources required by an application and deploying them in a single click, from creating the Deployments to the Services, PVCs, and the other components included by the application.
Included by the application?
It means that Helm uses a format that binds the resource files together into a complete package called a Chart.
Besides the benefit of managing the Kubernetes resources for that application, this makes the deployment faster.
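For intuition, a Chart is just a directory in a fixed layout; a minimal one looks roughly like this (names here are the conventional ones):

```
my-chart/
  Chart.yaml        # name, version, and description of the chart
  values.yaml       # default configuration values (overridable with --set)
  templates/        # Kubernetes manifests with templating placeholders
    deployment.yaml
    service.yaml
```

When we later pass `--set` flags to `helm install`, we are overriding entries in that chart's values.yaml.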
To use Helm, first we need to initialize it using the command: helm init ( it will launch the Tiller application on top of EKS to store the information and files of the Charts )
After that add the repository where that chart is located as,
helm init
helm repo add bitnami https://charts.bitnami.com/bitnami   # Repo for WordPress
helm repo list                                             # To see whether the repo has been added or not
If it fails, then proceed as:
kubectl -n kube-system create serviceaccount tiller
### Create a service account named tiller, providing some privileged power to the container
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
### Map this service account to the respective cluster role, i.e. the power we want to assign to that application
helm init --service-account tiller
### Re-initialise Tiller with the new service account
kubectl get pods --namespace kube-system
### Verify whether the pod is running or not
To install the WordPress application using Helm, use the commands:
kubectl create namespace wordpress   # Creating a new environment (namespace) for the WordPress application
helm install my-release --namespace wordpress bitnami/wordpress
The output of the command will show you what resources it has created. To get the IP of the WordPress site, use the command given in that same output.
This IP belongs to the load balancer created on AWS through the ELB service; to verify this we can go to the AWS GUI or use the aws-cli.
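To cross-check from the AWS side without leaving the terminal, the classic load balancers can be listed with the aws-cli:

```shell
# List the DNS names of the classic ELBs; one of them should match
# the EXTERNAL-IP that Kubernetes reports for the WordPress service
aws elb describe-load-balancers --query "LoadBalancerDescriptions[].DNSName"

# Compare with what Kubernetes reports
kubectl get svc -n wordpress
```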
3. Deploy Prometheus and Grafana using Helm
For this, we proceed in the same way as we did for the WordPress application.
Launching Prometheus,
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo list
kubectl create namespace monitor
helm install stable/prometheus --namespace monitor \
  --set alertmanager.persistentVolume.storageClass="gp2" \
  --set server.persistentVolume.storageClass="gp2"
### Specifying the volume type, i.e. the storage class used to make the data persistent
To connect to this Prometheus, create a port-forwarding rule using the commands:
- kubectl get svc -n monitor ( to get the URL for Prometheus; for this it uses the ELB service of AWS )
- kubectl -n monitor port-forward svc/<name of the service> 8888:80
Launching Grafana,
helm install stable/grafana --namespace grafana \
  --set persistence.storageClassName="gp2" \
  --set adminPassword='GrafanaAdm!n' \
  --set datasources."datasources\.yaml".apiVersion=1 \
  --set datasources."datasources\.yaml".datasources[0].name=Prometheus \
  --set datasources."datasources\.yaml".datasources[0].type=prometheus \
  --set datasources."datasources\.yaml".datasources[0].url=<URL from the ELB of Prometheus, get it from the SVC on Kubernetes> \
  --set datasources."datasources\.yaml".datasources[0].access=proxy \
  --set datasources."datasources\.yaml".datasources[0].isDefault=true \
  --set service.type=LoadBalancer
Now your Prometheus & Grafana both would be in a working state.
Finally, how do we delete all the resources in the correct order?
This step is also very important: the services we are using on AWS are not free, so if you don't delete them properly you will keep incurring charges.
To delete all the resources we have created use command,
a) kubectl delete all --all -n monitor && kubectl delete all --all -n wordpress
b) eksctl delete cluster -f cluster.yaml
To delete the cluster, use the same file you used at the time of creating it.
If you have any doubts, please drop me a message. Thank you.