Running your first Kubernetes workload in AWS with EKS

I have been using Kubernetes for about a year and a half, but in all that time I've only ever deployed workloads to on-premises Kubernetes clusters (or my local laptop).

Currently I'm working on an internal project to spin up a containerised Node.js application in Kubernetes so that I can experiment with different observability approaches. As part of this, I want to deploy my application onto the public cloud. I chose EKS because I have access to the company AWS account, but I'd be curious to try AKS and GKE in the future.

In this article I share how I got my workload up and running in Kubernetes. My objectives here were:

  • To spin up an EKS cluster
  • To deploy an application (that I wrote) into my EKS cluster
  • To be able to access the app over the internet
  • And to do as much of this as possible from the terminal (CLI) to enable automation of the process later on

What is EKS?

The Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes offering hosted on AWS. It lets you create and operate Kubernetes clusters using native AWS tooling, without needing to manage the underlying control-plane infrastructure.

Prerequisites

Before you begin you will want to have a basic understanding of containers and Kubernetes. I would strongly suggest making sure you can deploy your workload into a local Kubernetes cluster on your workstation (using a utility like Docker Desktop, minikube, or kind).

To do anything in AWS you will need an account. I don't know exactly which role(s) you need to operate an EKS cluster; my account had administrator access. If your organisation locks down AWS for security reasons, you may run into roadblocks.

You will also need to install the following tools:

  • The AWS CLI (command line interface).
  • kubectl (the Kubernetes command-line tool)
  • eksctl (the EKS command-line tool), which provides a simpler interface for creating and managing your EKS cluster. If you are using a Mac, I recommend using Homebrew to install it.

Step 1: Connect your terminal to AWS

This is the one thing I needed the AWS web console for. Log into your AWS account using a web browser, pick your account and role, and click on "Command line or programmatic access".

You will be provided with a set of commands to paste into your terminal, which configure the AWS CLI and eksctl so they can authenticate with this account. If you are on Windows, make sure to click the "PowerShell" tab, because the notation is different.
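For the standard (non-PowerShell) case, the pasted commands look something like this. The values below are hypothetical placeholders, not real credentials:

```shell
# Hypothetical placeholders: paste the real values from the AWS console.
export AWS_ACCESS_KEY_ID="ASIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="exampleSecretKey123"
export AWS_SESSION_TOKEN="exampleSessionToken456"
```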

Copy those three commands to the clipboard, paste them into your terminal or PowerShell session, and run them. All this does is set three environment variables on your workstation. To make sure everything is working, try the command:

aws sts get-caller-identity        

You should see a JSON response with some information about your account and role:
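It should look roughly like this (the account ID and ARN below are made-up placeholders):

```json
{
    "UserId": "AROAEXAMPLEROLEID:jane.doe",
    "Account": "111122223333",
    "Arn": "arn:aws:sts::111122223333:assumed-role/AdministratorAccess/jane.doe"
}
```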

If you've made it this far, congrats. Your terminal session is now able to connect to AWS.

Step 2: Create your EKS cluster

In a real production situation you would want to configure your EKS cluster to meet your specific needs. In my case, I just want a basic cluster to experiment with, so I'm going to leave everything as default.

You can create your cluster with just one command (replace "dpm" with whatever you want to call your cluster and "ap-southeast-2" with whatever region you are working in):


eksctl create cluster --name dpm --region ap-southeast-2        

NOTE: EKS has two different methods for running container workloads. The first is managed EC2, which is similar to spinning up virtual machines to host your Kubernetes nodes. The more recent approach is AWS Fargate, a serverless compute platform. I thought Fargate would be a nice simple option for what I was trying to do, but I ran into two issues:

  • It's slow. It took >25 minutes to create my cluster, and frequently 2-3 minutes or more to create resources (e.g. pods).
  • I was unable to access my app over the internet. To do this using EKS + Fargate there are some extra steps you need to take, which I wasn't able to figure out.

Because of this, I've used the managed EC2 approach. The documentation mentions that you need to manage your EC2 hosts yourself, but for what we're trying to achieve here we won't need to touch that configuration.
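A side note: if you later want more control (instance type, node count) while still working purely from the terminal, eksctl also accepts a declarative cluster config file. Here is a minimal sketch; the node group name, instance type, and count are illustrative assumptions, not eksctl's defaults:

```yaml
# cluster.yaml (hypothetical example)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: dpm
  region: ap-southeast-2
managedNodeGroups:
  - name: workers           # hypothetical node group name
    instanceType: t3.medium # hypothetical instance type
    desiredCapacity: 2
```

You would then run eksctl create cluster -f cluster.yaml instead of passing flags.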

Run the (adjusted) command above to create your EKS cluster. It will take some time, maybe 10-15 minutes. Go make a cup of coffee while you wait, or play with your dog(s) and/or cat(s).

Running this command will also configure kubectl to point to this cluster.

Step 3: Deploy your workload to EKS

You define Kubernetes resources in a YAML file, also known as a Kubernetes manifest. I had already written a Kubernetes manifest to deploy my service locally, but I found it wouldn't deploy into EKS.

After following several tutorials online and referring to examples, I found the following Kubernetes manifest successfully deployed my app to EKS and made it accessible over the internet:


apiVersion: v1
kind: Service
metadata:
  name: datapool-manager-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
  namespace: dpm
  labels:
    app: datapool-manager
spec:
  type: LoadBalancer
  selector:
    app: datapool-manager
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9192
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: datapool-manager-deploy
  namespace: dpm
  labels:
    app: datapool-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: datapool-manager
  template:
    metadata:
      labels:
        app: datapool-manager
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - name: datapool-manager-container
        image: thekiwisre/datapool-manager:latest
        ports:
        - containerPort: 9192        

There are a few points to mention here:

  • I am deploying two resources. The first is a LoadBalancer Service, which exposes my app to the internet. The second is a Deployment, which runs my Docker container in a pod.
  • My application runs on port 9192. Replace that with whatever port your service or application is exposed on.
  • There is an AWS-specific annotation on the Service which may be necessary depending on your setup.

NOTE: I built my Docker image on an M1 (ARM64) Mac. I initially ran into issues because, by default, this produces an image that is not compatible with the AMD64 architecture most Kubernetes nodes run on. To ensure compatibility I had to build my Docker image with the following command:


docker buildx build --push --platform=linux/amd64 -t thekiwisre/datapool-manager .        

You'll notice in my manifest I specifically define that these resources will be deployed to a namespace called "dpm" (short for "Datapool Manager"). Let's create that namespace in our EKS cluster:


kubectl create namespace dpm        
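Equivalently, if you prefer to keep everything declarative (so the whole deployment can eventually be driven by kubectl apply), the namespace can be defined in YAML:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dpm
```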

Then deploy the two resources to the EKS cluster using the same command you would use with any other Kubernetes cluster:


kubectl apply -f ./kubernetes/dpm-manifest.yaml        

Wait for your pod to reach the "Running" status using the following command (you can add the -w argument if you want to watch for changes in status):


kubectl get pods -n dpm

Once it's running we can check if the app is accessible over the internet. Run the following command:


kubectl get services -n dpm        

Copy the EXTERNAL-IP of the service from the output.

Even after the service has deployed, it may take 2-3 minutes or more for the app to become accessible over the internet, so be patient. Paste the EXTERNAL-IP into a browser with "http://" in front of it (or "https://" if your service is served over TLS/SSL), along with the path to your service. For example, my app was accessible at:


http://aff082e20c1a343888b3da8ac517c398-1084121075.ap-southeast-2.elb.amazonaws.com/DPM/STATUS

I could see my app was running and accessible over the internet.

That wasn't so hard after all!

Don't forget to delete the cluster when you're finished working (to avoid unnecessary charges on your AWS bill):


eksctl delete cluster --name dpm --region ap-southeast-2        

Summary

There were really just two things I needed to do differently to deploy in EKS compared to working with my local Kubernetes cluster:

  • Configuring and using eksctl to create the cluster
  • Adjusting my Kubernetes manifest (if required) for compatibility with EKS

There's a lot of detail I'd like to explore later, such as how to configure my hosts and nodes, but that's a challenge for another day.
