Amazon Web Services - Elastic Kubernetes Service & Elastic File System for your next great idea.
Pranav Shekhar
Deloitte US India | MLOps | DevOps - AL | Web Dev - MERN | Flutter & Firebase | Cloud - Hybrid & Multi | GAIT - AIR 50
This article is about creating a completely scalable, industry-ready application deployed on top of a Kubernetes cluster, using the Amazon EKS service to manage and orchestrate our containerized applications inside Kubernetes pods. This managed service (EKS) makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
In simpler words: are you a non-technical founder who can't hire a cloud operations team? Do you have a startup idea? Then EKS is the thing for you. Just deploy your application on EKS and sleep soundly, without worrying about scaling, balancing multiple client requests, or production failures. Let me walk you through all the steps to turn your idea into the next big thing.
We will follow the above workflow step by step to get our startup idea going. First, we need to configure the AWS Command Line Interface with an IAM user that has administrator privileges, so that it can access our AWS EKS cluster. Just go to 1. aws.amazon.com > 2. Log in with your root user or create a new account > 3. Services > 4. IAM > 5. Users > 6. Add a new user.
Set a custom password, click Next, and when you reach the permissions window: 7. Attach existing policies > 8. Tick AdministratorAccess > 9. Next, Next & Create user.
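For reference, the AdministratorAccess managed policy you just attached is equivalent to this policy document, which allows every action on every resource:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
```

This is why an Administrator user is enough for everything eksctl does below: it can create the VPC, EC2 instances, and EKS control plane without any further permission setup.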
The next step is configuring the user's credentials in the AWS Command Line Interface. Here is the Windows MSI installer link: https://awscli.amazonaws.com/AWSCLIV2.msi
Download the MSI file and follow the default installation wizard. Once you are done, check your AWS CLI version with the command:
```
C:\Users\KIIT>aws --version
aws-cli/2.0.17 Python/3.7.7 Windows/10 botocore/2.0.0dev21
```
Now we are ready to configure our user and manage our cloud account from the local CLI. The command to achieve this is:
```
C:\Users\KIIT>aws configure
AWS Access Key ID [****************EKOD]:
AWS Secret Access Key [****************IgPB]:
Default region name [ap-south-1]:
Default output format [json]:
```
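Under the hood, `aws configure` simply writes two plain-text files in your home directory. The values below are placeholders, not real keys:

```ini
; C:\Users\<you>\.aws\credentials
[default]
aws_access_key_id     = AKIAEXAMPLEKEYID
aws_secret_access_key = wJalrXUtnFEMIexampleSecretKey

; C:\Users\<you>\.aws\config
[default]
region = ap-south-1
output = json
```

Every AWS CLI, eksctl, and kubectl-via-EKS call in the rest of this article reads the `default` profile from these files.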
10. Copy and paste the credentials of your IAM user, and your configuration is done. The next thing we need is a client program to make requests to clusters running on the AWS EKS service. We will use eksctl to achieve our goal. Here is how to install eksctl:
First we need to install Chocolatey, a package manager for Windows.
This document will walk you through the Chocolatey installation, which you need if you are on Windows: https://chocolatey.org/install
Now install eksctl with the following commands in Windows PowerShell in Administrator mode:
```
PS C:\WINDOWS\system32> chocolatey install eksctl -y
PS C:\WINDOWS\system32> chocolatey install kubectl -y
```
We also installed kubectl, the standard client program for managing any Kubernetes cluster.
Now we need to place both executables in the same folder. For me it is C:\Users\KIIT\kube, and I will use the same path when setting the environment variable.
Search Windows for "env" > Edit environment variables for your account > and add the above path to your PATH variable.
We will also put the eksctl executable in the .kube folder used by kubectl. For me it is C:\Users\KIIT\.kube (note the dot in .kube).
Now we are completely ready to launch our very own cluster on top of AWS EKS in one go. We will be launching Drupal, a content management system for digitizing your own solutions, which uses a MySQL database on the backend to serve content writers. You can deploy your own website or application with the same procedure I am following: just upload your app or website image to Docker Hub and see the magic of Kubernetes. See my Docker project, linked below, for how to upload your application image to Docker Hub.
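If your application is, say, a static website, a minimal Dockerfile for the image you push to Docker Hub could look like this sketch (the `site/` folder and image name are hypothetical; adapt them to your project):

```dockerfile
# Minimal sketch: serve a static site with the official Apache httpd image.
FROM httpd:2.4
# Copy your site's files into Apache's document root.
COPY ./site/ /usr/local/apache2/htdocs/
EXPOSE 80
```

After `docker build -t <your-dockerhub-id>/mysite:1.0 .` and `docker push <your-dockerhub-id>/mysite:1.0`, you can reference that image in a Kubernetes Deployment exactly the way the Drupal image is referenced later in this article.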
First we will create our cluster, using standard Linux instances in two node groups. This is the cluster.yml file:
Note: mykey1111 is a key pair I created in the EC2 service on AWS, used to log in to the node group instances remotely via SSH.
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: mycluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: mykey1111
  - name: ng-mixed
    minSize: 1
    maxSize: 2
    instancesDistribution:
      maxPrice: 0.015
      instanceTypes: ["t3.small", "t2.micro"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName: mykey1111
```
Save the file and run the command below to launch the cluster on the AWS EKS managed service:
```
C:\Users\KIIT\Desktop\eks\task> eksctl create cluster -f cluster.yml
```
Wait 10-15 minutes, as it takes some time to provision everything as a completely managed service by Amazon. After the cluster is created, you will need some files to deploy your services as Infrastructure as Code. Search for EKS under Services on the AWS dashboard and check your created cluster.
Now we will update the kubeconfig file so that we can manage the created cluster with kubectl, the client program anyone using Kubernetes is most used to:
```
C:\Users\KIIT\Desktop\eks\task>aws eks update-kubeconfig --name mycluster
C:\Users\KIIT\Desktop\eks\task>kubectl config view
```
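If you are curious what `update-kubeconfig` actually added, the entry it writes to your kubeconfig looks roughly like the sketch below (server URL, certificate data, and the user name are placeholders). Note that kubectl authenticates by exec-ing the AWS CLI to fetch a short-lived token, which is why the CLI had to be configured first:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: mycluster.ap-south-1.eksctl.io
    cluster:
      server: https://<cluster-id>.ap-south-1.eks.amazonaws.com
      certificate-authority-data: <base64-encoded-CA-cert>
contexts:
  - name: mycluster
    context:
      cluster: mycluster.ap-south-1.eksctl.io
      user: eks-user
current-context: mycluster
users:
  - name: eks-user
    user:
      exec:
        command: aws
        args: ["eks", "get-token", "--cluster-name", "mycluster"]
```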
Now something new enters our discussion: completely centralized storage for the data inside our application. In the Linux world this is done with an NFS (Network File System) server; the equivalent concept in the AWS cloud is EFS, the Elastic File System. The major advantage of EFS over block storage is that it is independent of the environment of the running instances: all our data lives in one place, is easily manageable, and is not lost, served through Persistent Volume Claims provisioned dynamically on non-temporary volumes.
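To see the EFS-is-NFS connection concretely: even without any dynamic provisioner, an EFS file system can be mounted into Kubernetes as a plain, statically defined NFS-backed PersistentVolume. A hedged sketch, using the file-system ID that appears throughout this article:

```yaml
# Sketch: an EFS file system exposed as a static NFS-backed PersistentVolume.
# fs-c871e419 is this article's file-system ID; substitute your own.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-static-pv
spec:
  capacity:
    storage: 5Gi          # EFS is elastic; this figure only satisfies the API
  accessModes:
    - ReadWriteMany        # many pods on many nodes can mount it at once
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: fs-c871e419.efs.ap-south-1.amazonaws.com
    path: /
```

The provisioner we deploy below automates exactly this: it carves directories out of the EFS file system and creates such volumes on the fly for each PersistentVolumeClaim.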
Go to Services in your AWS dashboard > search for EFS > Create file system > accept all the default options > Create.
Now we need to log in to our instances remotely, using their public IPs, and install the amazon-efs-utils package from AWS on each running EC2 Linux instance:
```
ssh -i mykey1111.pem -l ec2-user 192.168.61.138

[ec2-user@ip-192-168-61-138 ~]$ sudo yum install amazon-efs-utils -y
```
We will do the same for all three running instances.
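What amazon-efs-utils buys you is the `efs` mount helper, which lets an instance mount the file system by its ID instead of a raw NFS address. A sketch, again using this article's file-system ID:

```
# One-off mount on an instance:
#   sudo mkdir -p /mnt/efs
#   sudo mount -t efs fs-c871e419:/ /mnt/efs
#
# Or persist it across reboots with an /etc/fstab entry:
fs-c871e419:/ /mnt/efs efs defaults,_netdev 0 0
```

Our pods will not mount EFS this way themselves (Kubernetes does it for them via the provisioner), but this is a handy way to inspect the stored data directly from a node.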
Now we will create the deployment YAML files for our application.
This is the EFS provisioner file, which provides our application with the centralized EFS storage we created, keeping our data in one managed place.
efsprovision.yml
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner1
spec:
  selector:
    matchLabels:
      app: efs-provisioner1
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner1
    spec:
      containers:
        - name: efs-provisioner1
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-c871e419
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: efs-storage1
          volumeMounts:
            - name: nfs-volume1
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-volume1
          nfs:
            server: fs-c871e419.efs.ap-south-1.amazonaws.com
            path: /
```
This is our storage.yml file, which provides persistent volume claims, backed by the provisioner above, to the pods running on our instances:
storage.yml
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs5
provisioner: efs-storage1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-drupal-3
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs5"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql-3
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs5"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
Next is our rbac.yml file, which grants cluster permissions to the service account our workloads run under.
Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise. Ideally, you would create an IAM user such as rbac-user4 who is authenticated to access the EKS cluster but only authorized (via RBAC) to list, get, and watch pods and deployments in the 'myns1' namespace. For simplicity, the rbac.yml below instead binds the cluster-admin role to the default service account in that namespace:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding-user4
subjects:
  - kind: ServiceAccount
    name: default
    namespace: myns1
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
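Binding cluster-admin is the quick route. If you want the least-privilege setup described above instead, a namespaced Role plus RoleBinding would look roughly like this sketch (the Role and binding names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: myns1
rules:
  # core API group: pods
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "get", "watch"]
  # apps API group: deployments
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["list", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: myns1
subjects:
  - kind: User
    name: rbac-user4
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With this in place, rbac-user4 can inspect pods and deployments in myns1 but cannot modify anything, which is the safer default for anyone who is not operating the cluster.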
And finally, the deployment files for our frontend (Drupal) and backend (MySQL), so the whole content management service can be launched in one go.
mysql.yml - the Drupal backend:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: drupal-db1
  labels:
    app: drupal1
spec:
  ports:
    - port: 3306
  selector:
    app: drupal1
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal-mysql1
  labels:
    app: drupal1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal1
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: drupal1
        tier: backend
    spec:
      containers:
        - image: mysql:5.6
          imagePullPolicy: IfNotPresent
          name: mysql1
          env:
            - name: MYSQL_DATABASE
              value: drupal-db1
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass5
                  key: password
            - name: MYSQL_USER
              value: pranav
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass5
                  key: password
          ports:
            - containerPort: 3306
              name: mysql1
          volumeMounts:
            - name: mysql-stateful-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-stateful-storage
          persistentVolumeClaim:
            claimName: efs-mysql-3
```
drupal.yml - the Drupal frontend:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: drupal1
  labels:
    app: drupal1
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
  selector:
    app: drupal1
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal1
  labels:
    app: drupal1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal1
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: drupal1
        tier: frontend
    spec:
      containers:
        - image: drupal:8-apache
          name: drupal1
          env:
            - name: DRUPAL_DB_HOST
              value: drupal-db1
            - name: DRUPAL_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass5
                  key: password
            - name: DRUPAL_DB_USER
              value: pranav
            - name: DRUPAL_DB_NAME
              value: drupal-db1
          ports:
            - containerPort: 80
              name: drupal1
          volumeMounts:
            - name: drupal-persistent-storage-4
              mountPath: /var/www/html
      volumes:
        - name: drupal-persistent-storage-4
          persistentVolumeClaim:
            claimName: efs-drupal-3
```
kustomization.yml - for launching our complete Infrastructure as Code in one click:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

secretGenerator:
  - name: mysql-pass5
    literals:
      - password=linuxtrevolds

resources:
  - efsprovision.yml
  - rbac.yml
  - storage.yml
  - mysql.yml
  - drupal.yml
```
Our final commands:
```
C:\Users\KIIT\Desktop\eks\task>kubectl create namespace myns1
C:\Users\KIIT\Desktop\eks\task>kubectl config set-context --current --namespace=myns1
C:\Users\KIIT\Desktop\eks\task>kubectl create -k . -n myns1
```
Our complete infrastructure is launched in one go.
On connecting to the public IP of our LoadBalancer service:
And we are done!