Creating a Complete AWS EKS Cluster Using the eksctl Tool and Launching a Website


What is Amazon EKS ?

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service.

Customers such as Intel, Snap, Intuit, GoDaddy, and Autodesk trust EKS to run their most sensitive and mission-critical applications.

Benefits - High Availability, Secure, Serverless option, Built with the Community.

Objective :

In this task we will look at EKS and its use cases: how it is used, how it is configured, the process for creating a cluster, and the types of clusters we can create. We will then integrate it with EBS, EFS, and ELB. After the integration we will launch a WordPress pod backed by MySQL; for this site we first have to configure MySQL and then WordPress. Next we will look at Helm — what it is for and how to configure it. We will then launch a Prometheus server on EKS and integrate it with Grafana, and finally we will cover the Fargate cluster.

Tools Used:

  • AWS CLI
  • Eksctl
  • Helm
  • Tiller
  • Kubectl


Solution :

So let's look at the practical hands-on part also simultaneously -

cluster.yml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig


metadata:
  name: lwcluster
  region: ap-south-1


nodeGroups:
   - name: ng1
     desiredCapacity: 2
     instanceType: t2.micro
     ssh:
        publicKeyName: mykey
   - name: ng2
     desiredCapacity: 1
     instanceType: t2.small
     ssh:
        publicKeyName: mykey
   - name: ng-mixed
     minSize: 2
     maxSize: 5
     instancesDistribution:
       maxPrice: 0.017
       instanceTypes: ["t3.micro", "t3.small"] # At least one instance type should be specified
       onDemandBaseCapacity: 0
       onDemandPercentageAboveBaseCapacity: 50
       spotInstancePools: 2     
     ssh:
        publicKeyName: mykey
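Assuming the file above is saved as cluster.yml, the cluster lifecycle is driven with eksctl. Since these commands need live AWS credentials to actually run, the sketch below only composes and prints the command strings (the flags shown are standard eksctl CLI flags):

```shell
# Compose the eksctl commands for the config above; running them for real
# requires AWS credentials, so here we only assemble and print them.
cfg="cluster.yml"
create_cmd="eksctl create cluster -f $cfg"
get_cmd="eksctl get cluster --region ap-south-1"
delete_cmd="eksctl delete cluster -f $cfg"
printf '%s\n' "$create_cmd" "$get_cmd" "$delete_cmd"
```

`eksctl create cluster -f` also writes the kubeconfig entry for the new cluster, so `kubectl get nodes` should work immediately afterwards.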

         

Instances created behind the scene on AWS :


Spot-Instances created behind the scene on AWS :



Note that Docker is already running on the nodes because of the EKS cluster. Also note that the number of pods we can launch is limited by the instance type — specifically, by how many network interfaces are attached to it.

For example, on a t2.micro only 4 pods can be launched, while on a t2.small 11 pods can be launched, and so on for the other instance types.
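These limits come from the AWS VPC CNI plugin: each pod gets a VPC IP address, so the cap per instance is ENIs × (IPv4 addresses per ENI − 1) + 2. A quick check of the numbers above (the ENI and IP counts are taken from AWS's EC2 instance-type documentation):

```shell
# Max pods per EC2 instance in EKS (AWS VPC CNI formula):
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
max_pods() {  # args: <ENI count> <IPv4 addresses per ENI>
  echo $(( $1 * ($2 - 1) + 2 ))
}
echo "t2.micro: $(max_pods 2 2)"   # 2 ENIs x 2 IPs each -> 4 pods
echo "t2.small: $(max_pods 3 4)"   # 3 ENIs x 4 IPs each -> 11 pods
```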

AWS publishes the per-instance-type pod limits (the eni-max-pods list), which can be consulted to see how many pods each instance type supports.

Now let's talk about AWS Fargate, which provides us a serverless architecture :-

AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and Amazon EKS. It lets us focus on building and operating our applications, whether we are running them with ECS or EKS. Using Fargate we can achieve rich observability of our applications.

fcluster.yml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig


metadata:
  name: far-lwcluster
  region: ap-southeast-1


fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default
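With this profile, any pod created in the kube-system or default namespace is scheduled onto Fargate — no EC2 node groups are needed. As an illustration (the pod name and image below are my own choices, not from the original setup), a pod matching the profile's selector would look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # illustrative name
  namespace: default   # matches the fargate-default profile selector
spec:
  containers:
    - name: web
      image: httpd     # any image; Fargate provisions the compute for it
```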


Whenever we run an eksctl command, it is just an automation program: behind the scenes it generates some code and sends it to CloudFormation, and CloudFormation is the one which actually does everything for us.

CloudFormation is the one which contacts VPC and creates subnets for us; CloudFormation is the one which contacts EC2 and launches instances for us; and it performs many more things for us behind the scenes.

In AWS, we have independent services for everything, and if one service wants to communicate with another service, it requires some power or permission. This kind of permission is known as an IAM role.

After the cluster is configured, if we want to access it as a client (here using eksctl), we must log in to AWS with credentials that have the power to do so.



Why use EFS instead of EBS ?

Note that if we use EBS for persistent storage, we can end up in trouble: EKS launches pods across different data centers (Availability Zones), and an EBS volume can only be attached to instances in the same data center. So it is recommended to use EFS.

Amazon EFS is a regional service for high availability and durability.

Implemented part shown below -


Here's the code of the task performed & its output :-

create-efs-provisioner.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-c1b83210
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: himanshu/nfs-efs # must match the StorageClass provisioner below
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-c1b83210.efs.ap-south-1.amazonaws.com
            path: /

create-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

create-storage.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-efs
provisioner: himanshu/nfs-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-wordpress
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

deploy-mysql.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: efs-mysql

deploy-wordpress.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: efs-wordpress

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator: 
  - name: mysql-pass # name referenced by the secretKeyRef in both deployments
    literals: 
    - password=bGludXg=
resources: 
  - create-efs-provisioner.yaml
  - create-rbac.yaml
  - create-storage.yaml
  - deploy-mysql.yaml
  - deploy-wordpress.yaml
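One caveat worth checking here: a kustomize secretGenerator base64-encodes its literals itself, and bGludXg= is already base64 for linux — so as written, the stored password is the literal string bGludXg=, not linux. The round trip can be verified locally:

```shell
# "linux" base64-encodes to "bGludXg=", and decodes back again.
printf '%s' 'linux' | base64        # -> bGludXg=
echo
printf '%s' 'bGludXg=' | base64 -d  # -> linux
echo
```

The whole stack is then applied in one shot with `kubectl apply -k .` from the directory containing kustomization.yaml.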

OUTPUT :-


What is HELM ?

Helm is a package manager for Kubernetes that helps us manage Kubernetes applications. It is a graduated project in the CNCF and is maintained by the Helm community.

Helm Charts help us define, install, and upgrade even the most complex Kubernetes applications. Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.

Here are the commands I ran, as shown below -

helm init

helm repo add stable https://kubernetes-charts.storage.googleapis.com/

helm repo list

helm search -l

kubectl create ns lw1

kubectl get pods -n kube-system

kubectl -n kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

helm init --service-account tiller --upgrade

helm install --name my-release stable/jenkins --namespace lw1

helm install stable/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2" --set server.persistentVolume.storageClass="gp2"

kubectl -n prometheus port-forward svc/listless-boxer-prometheus-server 8888:80

helm install stable/grafana --namespace grafana  --set persistence.storageClassName="gp2" --set adminPassword=redhat --set service.type=LoadBalancer


Prometheus :- An open-source monitoring system originally developed by engineers at SoundCloud in 2012. In order to see the Prometheus dashboard here, we have to perform port forwarding.


To use port forwarding, run the following commands :-

kubectl get svc -n prometheus

kubectl -n prometheus port-forward svc/dull-bumblebee-prometheus-server 8888:80

To install Grafana in its own namespace :-

helm install stable/grafana --namespace grafana --set persistence.storageClassName="gp2" --set adminPassword=redhat --set service.type=LoadBalancer










To use port forwarding with Grafana :-

kubectl get svc -n grafana

kubectl -n grafana  port-forward svc/exasperated-seal-grafana  1234:80


At last, we must delete our cluster, because EKS is not a free service on AWS.


Thank You for reading this Article :)
