Amazon Elastic Kubernetes Service with HELM and EFS

Today I'm going to discuss Amazon EKS, EFS as a volume provider on EKS, Helm, and Tiller.

We are going to set up a full Kubernetes cluster using an AWS service called Elastic Kubernetes Service (EKS), and also launch a Joomla-MariaDB setup using Helm and Tiller.

Tools Required:

  • AWS CLI
  • Eksctl
  • Helm
  • Tiller
  • Kubectl

Creating Kubernetes Cluster:

We need an IAM user with AdministratorAccess to launch EKS. For this, go to your AWS account -> IAM service and then create a new user.


After this, come to your command prompt, run aws configure, and provide your Access Key ID, Secret Access Key, default region, and output format.

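The configuration step looks roughly like this (the key values shown are placeholders, not real credentials):

```shell
# Configure the AWS CLI with the new IAM user's credentials.
# The values below are placeholders; paste your own keys and region.
aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: ****************************************
# Default region name [None]: ap-south-1
# Default output format [None]: json
```

The credentials are stored under ~/.aws/ and used by both the AWS CLI and eksctl.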

We are using Eksctl tool to launch our cluster. It is a simple CLI tool for creating clusters on EKS - Amazon's new managed Kubernetes service for EC2. It uses CloudFormation to do a full setup.

For launching cluster using Eksctl, we need one YAML file.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lwcluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 5
    instanceType: t2.micro
    ssh:
      publicKeyName: key_name
  - name: ng2
    desiredCapacity: 3
    instanceType: t2.large
    ssh:
      publicKeyName: key_name

You can change these values as you need or add more node groups too. Check the eksctl official documentation for more options.

After this, run the command eksctl create cluster -f cluster.yaml, and your full setup is launched.


You can go and check from the AWS web UI too.

To check that your cluster is launched, list it with eksctl. Then update your kubeconfig file so that kubectl points at the new cluster.

Now you are able to run kubectl commands to launch pods or any other service on the EKS cluster.
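Assuming the cluster name and region from the config file above (lwcluster in ap-south-1), the verification steps can be sketched as:

```shell
# List clusters to confirm the new one exists and is active
eksctl get cluster --region ap-south-1

# Write the cluster's credentials into ~/.kube/config
aws eks update-kubeconfig --name lwcluster --region ap-south-1

# kubectl should now see the worker nodes
kubectl get nodes
```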

Creating one Storage Class for providing storage using EFS:

By default, EKS creates one storage class (gp2) that provides persistent volumes using the EBS service. So now we are first going to set up our own storage class, which uses EFS as the storage provider.

  • First, we need to create one AWS Elastic File System. I'm using the web UI for this. Go to your AWS console -> EFS and then create one file system.
  • At the time of creating it, provide the same VPC and security group that your EKS cluster gives to your nodes, so that they can connect to each other.

After this, your file system is created. Copy the File System ID and DNS name for later use.

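If you prefer the CLI over the web UI, a rough equivalent is below. The subnet and security-group IDs are placeholders; use the ones belonging to your cluster's VPC.

```shell
# Create the file system; note the FileSystemId in the output
aws efs create-file-system --creation-token eks-efs --region ap-south-1

# Create a mount target in each subnet used by the worker nodes,
# attached to the same security group as the nodes
aws efs create-mount-target \
    --file-system-id fs-d1da5000 \
    --subnet-id subnet-xxxxxxxx \
    --security-groups sg-xxxxxxxx
```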
  • Now we use the efs-provisioner to create one Deployment. The YAML code for this is below...
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-d1da5000
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: gaurav/nfs-eks
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-d1da5000.efs.ap-south-1.amazonaws.com
            path: /

Make some changes in the above file, like the value of FILE_SYSTEM_ID, the NFS server address, and your PROVISIONER_NAME, to match your setup. The command for this is kubectl create -f provisioner.yaml
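Before moving on, it's worth confirming that the provisioner pod actually came up and can reach EFS:

```shell
# The Deployment should show 1/1 ready replicas
kubectl get deployment efs-provisioner

# Inspect the pod's logs if it stays in Pending or CrashLoopBackOff
kubectl logs -l app=efs-provisioner
```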

  • After this, we need to create one ClusterRoleBinding file too. This provides permissions to the efs-provisioner.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Command for running this, kubectl create -f role.yaml

  • After this, you can create your own storage class.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-eks
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: gaurav/nfs-eks

To apply this, run kubectl create -f sc.yml


Now, you can see that one more storage class is created.

  • To make it the default, I'm deleting the gp2 storage class.
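As a quick sanity check of the new class, you can create a PersistentVolumeClaim against it. The claim name and size here are just examples:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-test-claim
spec:
  storageClassName: nfs-eks
  accessModes:
    - ReadWriteMany        # EFS/NFS supports access from many nodes
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl create -f pvc.yaml and check that it reaches the Bound state with kubectl get pvc.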

Now our EKS cluster is integrated with EFS.

Helm And Tiller:

Helm is the first application package manager running on top of Kubernetes. It allows describing the application structure through convenient helm-charts and managing it with simple commands.

Helm is a client-side program that talks to a server-side component named Tiller, which lives inside your Kubernetes cluster and is responsible for performing the patches and changes to the resources you ask it to manage.

Download these tools from their official sites and add them to your PATH variable.

Now for initializing these, run these commands...

helm init
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
kubectl -n kube-system create serviceaccount tiller 
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade

Now your Helm-Tiller setup is ready to use. Helm also provides a hub from which you can directly install charts.
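With Helm 2, you can search the configured repos for a chart and inspect its options before installing (output will vary with chart versions):

```shell
# Search all configured repositories for Joomla charts
helm search joomla

# Show the chart's configurable values before installing
helm inspect values bitnami/joomla
```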

  • Now, as I said before, I'm using the Joomla chart to launch a full setup on top of Kubernetes. It also packages the MariaDB chart, which bootstraps the MariaDB deployment required by the Joomla! application.
  • Now run these two commands...
helm repo add bitnami https://charts.bitnami.com/bitnami

helm install --name my-release --set joomlaUsername=admin,joomlaPassword=password,mariadb.mariadbRootPassword=redhat bitnami/joomla

This launches multiple resources and provides you a fully running Joomla server with a persistent volume.


After this, run kubectl get all to check all the services running.


You can see that it provides an AWS load balancer address to the Joomla service to connect it to the outer world.

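To get the Joomla URL, read the load balancer hostname off the service. The service name below assumes the release name my-release used above:

```shell
# The EXTERNAL-IP column shows the ELB's DNS name
kubectl get svc my-release-joomla

# Or extract just the hostname
kubectl get svc my-release-joomla \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```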

Don't forget to delete the EKS cluster. As we created one EFS file system, first delete the file system. After that, run the command...

eksctl delete cluster -f cluster.yaml

We can also make the cluster using a Fargate profile. I'm sharing the code in my GitHub repo as well...

GitHub Link: https://github.com/gaurav-gupta-gtm/aws-eks-setup

Thanks for reading :)

Do like it if you find it useful... If you have any queries, please DM me...
