AWS EKS

Hello everyone! Let me first tell you what the EKS service of AWS is.

EKS stands for Elastic Kubernetes Service, an Amazon offering that lets you run Kubernetes on AWS without maintaining your own Kubernetes control plane. It is a fully managed service: Amazon EKS automatically detects and replaces unhealthy control plane instances, restarting them across the Availability Zones within the Region as needed, and it leverages the architecture of AWS Regions to maintain high availability.


From here, let's start the AWS EKS task, where I will show the use cases I applied. There are 4 use cases, which are as follows -

Let's begin :-

1. CREATING A KUBERNETES CLUSTER USING AWS EKS :-

I have created a Kubernetes cluster using AWS EKS; I am providing a link where you can refer to it - Elastic Kubernetes Services.


The name of the cluster YAML file is cluster.yml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lwcluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 1
    instanceType: t2.micro
    ssh:
        publicKeyName: myredhatkey
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
        publicKeyName: myredhatkey

2. DEPLOYING WORDPRESS AND MYSQL MULTI-TIER ARCHITECTURE ON TOP OF THE EKS CLUSTER :-

Here I first attached a public key in the YAML file and then created the cluster with the following eksctl and kubectl commands:

eksctl create cluster -f cluster.yml

eksctl get cluster

kubectl get nodes

kubectl get all 

The cluster.yml file is -

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lwcluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 1
    instanceType: t2.micro
    ssh:
        publicKeyName: myredhatkey
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
        publicKeyName: myredhatkey
  - name: ngmixed
    minSize: 2
    maxSize: 3
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t3.small", "t3.medium"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
        publicKeyName: myredhatkey 
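
Once this cluster is up, a quick way to verify the node groups (a small sketch; the cluster name and region come from the file above) is:

eksctl get nodegroup --cluster lwcluster --region ap-south-1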

The web-pvc.yml file is-

apiVersion: v1
kind: PersistentVolumeClaim

metadata:
  name: lwpvc1

spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
 
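For reference, this is roughly how the WordPress deployment claims that volume (a hedged sketch of the pod template fragment; the container name and image tag are illustrative, only the claim name lwpvc1 comes from web-pvc.yml above):

# pod template fragment from the WordPress Deployment
spec:
  containers:
    - name: wordpress
      image: wordpress:5.4-apache
      ports:
        - containerPort: 80
      volumeMounts:
        - name: wordpress-storage
          mountPath: /var/www/html   # WordPress files live on the PVC
  volumes:
    - name: wordpress-storage
      persistentVolumeClaim:
        claimName: lwpvc1            # the PVC defined in web-pvc.yml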

After the nodes and the cluster get created, check the nodes or the service, either from the CLI or from the dashboard, to get the DNS name, and copy-paste it into the browser. The website sits behind a Service of type LoadBalancer and stores its data on a PVC, so our data remains persistent and traffic stays distributed. Here I used a kustomization file with secret keys (a sketch is shown after the command below).

kubectl create -k .
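
The kustomization.yaml can look roughly like this (a minimal sketch modeled on the standard Kubernetes WordPress/MySQL tutorial; the secret name, password literal, and resource file names are assumptions):

# kustomization.yaml - generates the MySQL secret and applies both tiers
secretGenerator:
  - name: mysql-pass
    literals:
      - password=YOUR_PASSWORD
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml

Running kubectl create -k . then generates the mysql-pass secret and creates both deployments and their services in one shot.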

Each time I open my website the data remains persistent, and there is no need to log in again and again, because my data is saved in cloud PV storage.


3. USING HELM PACKAGE MANAGER TOOL TO LAUNCH PROMETHEUS AND GRAFANA :-

What is Helm? Helm is a tool that streamlines installing and managing Kubernetes applications. Helm is the first application package manager running atop Kubernetes. It allows describing the application structure through convenient Helm charts and managing it with simple commands. Helm is important because it is a huge shift in the way server-side applications are defined, stored, and managed.

Before launching Prometheus and Grafana we need to initialize Helm and run a few commands to install Tiller. Now what is Tiller? Tiller is the server-side component of Helm v2 that actually communicates with the Kubernetes API to manage our Helm packages: it runs on your Kubernetes cluster, listens for commands from helm, and handles the configuration and deployment of software releases on the cluster.

helm init

helm repo add stable https://kubernetes-charts.storage.googleapis.com/

helm repo list

helm repo update


For the configuration of Tiller, we need to run the following commands.

kubectl -n kube-system create serviceaccount tiller 

kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller 

helm init --service-account tiller

kubectl get pods --namespace kube-system

Hence, Tiller is configured.


Since Tiller is configured, we now need to install and launch Prometheus and Grafana.

Now we need to create a namespace for Prometheus and then install Prometheus using Helm. During the install we set the persistent volume storage class (gp2) for both the Prometheus server and the alertmanager, and Helm creates the services that we can then expose.

Here AWS does most of the work for Prometheus: it automatically fetches metrics from our nodes and registers them as targets, so there is nothing else to do. After exposing, we port-forward the service's port 80 to a local port with the following commands.

kubectl create namespace prometheus

helm install stable/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2" --set server.persistentVolume.storageClass="gp2"

kubectl get svc -n prometheus

kubectl -n prometheus port-forward svc/flailing-buffalo-prometheus-server 8888:80

After that, fetch the IP and the port, paste them in the browser, and boom, your Prometheus server is launched.


Now we need to create a namespace for Grafana and then install Grafana using Helm. During the install we set the persistent volume storage class for Grafana and expose it through a Service of type LoadBalancer.

Then AWS and Grafana do the rest of the work: Grafana automatically fetches the data from the Prometheus server and by default creates the graphs and visuals for us.

After linking Grafana with Prometheus, we read the generated admin secret with -o yaml, and the service command gives us the external IP of the Grafana server.

We can reach the Grafana dashboard with the following commands.

kubectl create namespace grafana

helm install stable/grafana --namespace grafana --set persistence.storageClassName="gp2" --set adminPassword='GrafanaAdm!n' --set datasources."datasources\.yaml".apiVersion=1 --set datasources."datasources\.yaml".datasources[0].name=Prometheus --set datasources."datasources\.yaml".datasources[0].type=prometheus --set datasources."datasources\.yaml".datasources[0].url=http://prometheus-server.prometheus.svc.cluster.local --set datasources."datasources\.yaml".datasources[0].access=proxy --set datasources."datasources\.yaml".datasources[0].isDefault=true --set service.type=LoadBalancer

kubectl get  secret  worn-bronco-grafana   --namespace  grafana  -o yaml
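
To pull just the admin password out of that secret and find the external address (a hedged shortcut; worn-bronco-grafana is the auto-generated release name from the install above):

kubectl get secret worn-bronco-grafana --namespace grafana -o jsonpath="{.data.admin-password}" | base64 --decode

kubectl get svc --namespace grafana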




Hence, the Prometheus and Grafana servers are configured.

4. CONFIGURING THE FARGATE CLUSTER :-

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). It allows you to run containers without having to manage servers or clusters: Fargate dynamically provisions the compute for your pods as Fargate nodes, which makes it easy for you to focus on building your applications.

To create a Fargate cluster, you just run a YAML script that creates the cluster; you don't have to do anything else, as AWS dynamically manages everything for you.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: akansh-fargate-lwcluster
  region: ap-southeast-1


fargateProfiles:
  - name: fargate-default
    selectors:
     - namespace: kube-system
     - namespace: default

To create the Fargate cluster and verify it, run the following commands:

eksctl create cluster -f fargate-cluster.yml

eksctl get cluster --region ap-southeast-1

kubectl get ns

kubectl get pods -n kube-system -o wide

You can run the Fargate cluster in any supported region; since I ran it in the Singapore region, that is why I set ap-southeast-1 there.
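
To watch Fargate scheduling in action, you can drop a test workload into the default namespace, which the Fargate profile above selects (a quick sketch with a generic nginx image):

kubectl create deployment test-nginx --image=nginx

kubectl get pods -o wide

kubectl get nodes

The pod should come up on a dynamically provisioned fargate-* node instead of an EC2 instance.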


Hence, the Fargate cluster is created.

If anyone wants to ask or suggest anything, feel free to ping me.

Below I am providing my GitHub link; all the code has been uploaded there.

Thank You

GitHub link -




