Kubernetes Practical Guide: Creating an EKS Cluster for a Simple App Using Amazon EKS

INTRODUCTION

Amazon Elastic Kubernetes Service (EKS) is a managed service that makes it easy to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane. Kubernetes is a powerful orchestration tool for containerized applications, and EKS simplifies the process of deploying, managing, and scaling these applications.

In this practical guide, we will walk through the steps to create an EKS cluster and deploy a simple three-tier application consisting of a frontend, a backend, and a database. The main idea is to show how to implement a simple cluster in EKS. The backend and database will be accessible only internally within the cluster, while the frontend will be exposed to users outside the cluster through a load balancer.

The main aim of this article is to guide you through deploying applications on AWS EKS rather than focusing on Kubernetes fundamentals. While we touch on Kubernetes concepts, the primary goal is to demonstrate how to set up your applications using EKS, offering a practical approach to leveraging AWS's managed Kubernetes service.

Step 0: Configure AWS CLI

  1. Make sure you have the AWS CLI installed on your system. You can download it from the AWS CLI installation page.
  2. Run the command

 aws configure         

3. You will be prompted to enter the following details:

  • AWS Access Key ID: Your AWS access key ID.
  • AWS Secret Access Key: Your AWS secret access key.
  • Default region name: The default region you want to use (e.g., us-west-2).
  • Default output format: The output format you prefer (e.g., json, yaml, text).

Example:

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json        
The values provided for access key and secret key are just an example and not valid

You can verify your configuration by running:

aws configure list        

This configuration allows the AWS CLI to authenticate and interact with your AWS account using the provided credentials and settings.
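Beyond inspecting the local settings, you can confirm the credentials actually work by calling the STS service; if the call succeeds, it prints the account ID, user ID, and ARN associated with your configuration:

```shell
# Ask AWS "who am I?" — succeeds only if the configured
# credentials are valid, and prints the account and user ARN.
aws sts get-caller-identity
```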

Step 1: Set Up Amazon EKS Cluster

There are two different ways to create a cluster: via the AWS Management Console in the EKS service, or through the command line interface (CLI). In this tutorial, we will use the CLI, passing a configuration file to the command.

Please note that this guide assumes you have already set up the necessary IAM users and roles; those steps are beyond the scope of this article. If you are not familiar with them, please refer to the official AWS documentation for instructions on how to complete these prerequisites. Another option is to use the root user or a user with admin privileges, but that is neither suitable nor advisable for real-world scenarios.

The configuration file I will be using is:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: luna-cluster
  region: us-east-1
nodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    volumeSize: 20        
Mind that the t3.medium instance type specified in the configuration is not free tier eligible!

This configuration file specifies a cluster named luna-cluster in the us-east-1 region with a single node group named standard-workers. The node group consists of t3.medium instances, with a desired capacity of 2 instances, and can scale between 1 and 3 instances as needed, each with a 20 GB EBS volume.

Then we run the command that will start the process of creating the cluster and the worker nodes.

eksctl create cluster -f cluster-config.yaml        

Step 2: Configure kubectl to Use Your New Cluster

aws eks --region us-east-1 update-kubeconfig --name luna-cluster        

This command configures the local kubectl command-line tool to interact with an Amazon EKS cluster. What does it do?

  • Retrieve cluster information: The command uses the AWS CLI to fetch information about the EKS cluster specified by --name luna-cluster in the specified region (--region us-east-1).
  • Update kubeconfig file: It updates the local kubeconfig file, typically located at ~/.kube/config, with the necessary details to communicate with the EKS cluster.
  • Set context: It sets the current context to the newly added EKS cluster. The context defines which cluster and user kubectl uses by default.

The result should be something similar to this, indicating that we have successfully configured kubectl to point to our newly created cluster:

Added new context arn:aws:eks:us-east-1:000000000000:cluster/luna-cluster to /Users/me/.kube/config        
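As a quick sanity check, you can confirm which context kubectl is using and that the worker nodes have registered with the cluster:

```shell
# Show the context kubectl will use by default
kubectl config current-context

# List the worker nodes; with the configuration above you should
# see two t3.medium nodes in the Ready state
kubectl get nodes
```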

Step 3: Create Docker images, create repositories, and push them to AWS ECR

We will use AWS ECR to host our images. To create the repositories, we use the CLI and run two commands: one for the backend repository and one for the frontend repository.

aws ecr create-repository --repository-name frontend        
aws ecr create-repository --repository-name backend        

Once we have created the repositories, we need the Docker images. For simplicity, let's assume you have these Dockerfiles ready. For the database the official Postgres image will be used.

I created a simple backend with a Postgres connection and one endpoint that returns the timestamp from a query to the database. The frontend makes a GET request to the backend and shows the timestamp.
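Before pushing, Docker must be authenticated against the registry. Assuming the same region and the placeholder account ID 000000000000 used throughout this article, the standard ECR login looks like this:

```shell
# Fetch a temporary ECR auth token and pipe it to docker login.
# Replace 000000000000 with your actual AWS account ID.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 000000000000.dkr.ecr.us-east-1.amazonaws.com
```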
docker buildx build --platform linux/amd64,linux/arm64 -t 000000000000.dkr.ecr.us-east-1.amazonaws.com/backend:latest --push .
docker buildx build --platform linux/amd64,linux/arm64 -t 000000000000.dkr.ecr.us-east-1.amazonaws.com/frontend:latest --push .

Each of the above commands builds a Docker image for multiple platforms (linux/amd64 and linux/arm64) and pushes it directly to the specified Amazon ECR (Elastic Container Registry) repository. Mind that the image tag must include the AWS repository address.

Step 4: Define Kubernetes Manifests

First we will create the deployments. Deployments manage the deployment and scaling of a set of Pods and ensure they run in the desired state. They provide features like rolling updates, rollbacks, and scaling of applications, making it easy to maintain application availability and consistency.

In our example we have three definition files: one for the backend, one for the frontend, and one for the database.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: 000000000000.dkr.ecr.us-east-1.amazonaws.com/backend:latest
        ports:
        - containerPort: 8080
        env:
        - name: DB_USER
          value: "postgres"
        - name: DB_PASSWORD
          value: "postgres"
        - name: DB_HOST
          value: "postgres-service"
        - name: DB_NAME
          value: "kubernetes_test"
        

As we can see, the environment values are written directly in the definition file. This is definitely not a good practice. In future articles I will implement Kubernetes Secrets to handle this case.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: 000000000000.dkr.ecr.us-east-1.amazonaws.com/frontend:latest
          ports:
            - containerPort: 80
          env:
            - name: BACKEND_URL
              value: "https://backend-service:8080"         

Note how it references the backend through backend-service:8080. The Services for the backend, as well as for the database and frontend, will be created next.
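The database Deployment itself is applied in Step 5 (database-deployment.yaml) but is not listed in this article. A minimal sketch, assuming the official Postgres image and the same credentials and database name the backend's environment variables expect, could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16        # official Postgres image
        ports:
        - containerPort: 5432
        env:                      # must match the backend's DB_* values
        - name: POSTGRES_USER
          value: "postgres"
        - name: POSTGRES_PASSWORD
          value: "postgres"
        - name: POSTGRES_DB
          value: "kubernetes_test"
```

Note that a bare Deployment gives the database no persistent storage, so data is lost if the Pod is rescheduled. For anything beyond a demo, a StatefulSet with a PersistentVolumeClaim is the usual choice.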

Now we will create the services. Kubernetes Services provide stable network endpoints to access a set of Pods. They enable communication between different components of an application or with external users, facilitating load balancing and service discovery.

# Backend Service
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: backend        

port is the port on which the Service itself is exposed; clients send requests to this port. targetPort is the port on the Pod to which the Service forwards traffic.

# Frontend Service
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: frontend        

In this case, since we want external users to access this application, we set the type LoadBalancer. This means that an external load balancer is provisioned to expose the Service to the internet. This type of Service directs traffic from the external load balancer to the Pods running in the cluster, providing a single IP address that external clients can use to access the application. This is typically used to make an application accessible outside of the Kubernetes cluster.

# Database Service
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    app: postgres        

Step 5: Apply Kubernetes Manifests

In order to apply the definition files, we run these commands. The -f argument is the path to the file.

kubectl apply -f frontend-deployment.yaml
kubectl apply -f frontend-service.yaml
kubectl apply -f backend-deployment.yaml
kubectl apply -f backend-service.yaml
kubectl apply -f database-deployment.yaml
kubectl apply -f database-service.yaml        

You can verify the services and pods status using:

kubectl get pods
kubectl get services        

To test your app deployed on EKS, use kubectl get services to find the frontend service's external IP or DNS name. Copy the URL provided, paste it into your web browser, and hit enter to access your deployed application. This verifies that your app is running correctly and reachable via the Service exposed on Kubernetes.
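The load balancer address can also be extracted directly with a jsonpath query (assuming the frontend-service name defined above):

```shell
# Print the DNS name AWS assigned to the frontend load balancer
kubectl get service frontend-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```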

CONCLUSION

Finally, to avoid ongoing charges in AWS after setting up a Kubernetes cluster with Amazon EKS, you'll need to delete several resources. Note that node groups must be deleted before the cluster itself:

aws eks delete-nodegroup --cluster-name <cluster-name> --nodegroup-name <nodegroup-name>
aws eks delete-cluster --name <cluster-name>

Since the cluster was created with eksctl, eksctl delete cluster --name <cluster-name> will remove the cluster and its node groups in a single command.

Verify if there are any other resources related to your EKS setup, such as S3 buckets, EBS volumes, or logs, and delete them if not needed.

By following the outlined steps, you can set up a Kubernetes cluster. Even though this is a simple example, it sets the stage for more advanced learning and scaling, making it easier to continue expanding your Kubernetes skills on AWS EKS.

