Running your first Kubernetes workload in AWS with EKS
I have been using Kubernetes for about a year and a half, but in all that time I've only ever deployed workloads to on-premises Kubernetes clusters (or my local laptop).
Currently I'm working on an internal project to spin up a containerised Node.js application in Kubernetes so that I can experiment with different observability approaches. As part of this, I want to deploy my application onto the public cloud. I chose EKS because I have access to the company AWS account, but I'd be curious to try AKS and GKE in the future.
In this article I share how I got my workload up and running on EKS.
What is EKS?
The Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service hosted on AWS. It lets you create and operate Kubernetes clusters using native AWS tooling, without having to provision or manage the Kubernetes control plane yourself.
Pre-requisites
Before you begin you will want to have a basic understanding of containers and Kubernetes. I would strongly suggest making sure you can deploy your workload into a local Kubernetes cluster on your workstation (using a utility like Docker Desktop, minikube, or kind).
To do anything in AWS you will need an account. I do not know the minimal set of role(s) needed to operate an EKS cluster; my account had administrator access. If your organisation locks down AWS for security reasons, you may run into roadblocks.
You will also need the following tools installed: the AWS CLI, eksctl, and kubectl.
Step 1: Connect your terminal to AWS
This is the one thing I needed the AWS web console for. Log into your AWS account using a web browser, pick your account and role, and click on "Command line or programmatic access".
You will be provided with a set of commands to paste into your terminal, which configure the AWS CLI and eksctl to authenticate against this account. If you are on Windows, make sure to click the "PowerShell" tab, because the syntax is different.
Copy those three commands, paste them into your terminal or PowerShell session, and run them. All this does is set three environment variables in your shell session, so to check that everything is working, try the command:
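The commands AWS generates will look something like the following (the values here are placeholders for illustration, not real credentials -- paste the real ones from the console instead):

```shell
# Placeholder values for illustration -- use the commands AWS generates for you.
export AWS_ACCESS_KEY_ID="ASIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SESSION_TOKEN="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```

These are temporary credentials scoped to your current shell session, so if you open a new terminal you will need to repeat this step.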
aws sts get-caller-identity
You should see a JSON response with some information about your account and role.
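The exact values will differ for your account, but the response has this general shape (IDs redacted here):

```json
{
    "UserId": "AROAXXXXXXXXXXXXXXXXX:my.user",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/MyRole/my.user"
}
```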
If you've made it this far, congrats. Your terminal session is now able to connect to AWS.
Step 2: Create your EKS cluster
In a real production situation you would want to configure your EKS cluster to meet your specific needs. In my case, I just want a basic cluster to experiment with, so I'm going to leave everything as default.
You can create your cluster with just one command (replace "dpm" with whatever you want to call your cluster and "ap-southeast-2" with whatever region you are working in):
eksctl create cluster --name dpm --region ap-southeast-2
NOTE: EKS has two different methods for running container workloads. The first is managed EC2 node groups, which is similar to spinning up virtual machines to host your Kubernetes nodes. The other, more recent approach is AWS Fargate, a serverless compute platform. I thought that Fargate would be a nice simple option for what I was trying to do, but I ran into a couple of issues that ruled it out.
Because of this, I've used the managed EC2 approach. Although the documentation mentions that you manage the EC2 hosts yourself, for what we're trying to achieve here we won't need to touch that configuration.
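If you do want more control than the one-line command gives you, eksctl also accepts a cluster config file instead of flags. A minimal sketch (the nodegroup name, instance type, and size below are illustrative choices of mine, not eksctl's defaults):

```yaml
# dpm-cluster.yaml -- create with: eksctl create cluster -f dpm-cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: dpm
  region: ap-southeast-2
managedNodeGroups:
  - name: dpm-workers
    instanceType: m5.large
    desiredCapacity: 2
```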
Run the (adjusted) command above to create your EKS cluster. It will take some time, maybe 10-15 minutes. Go make a cup of coffee while you wait, or play with your dog(s) and/or cat(s).
Running this command will also configure kubectl to point to this cluster.
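If you later switch workstations or lose your kubeconfig, you don't need to recreate anything -- the AWS CLI can regenerate the kubectl configuration for an existing cluster (adjust the name and region to yours):

```shell
# Point kubectl at an existing EKS cluster
aws eks update-kubeconfig --name dpm --region ap-southeast-2

# Verify kubectl is now talking to the right cluster
kubectl config current-context
```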
Step 3: Deploy your workload to EKS
You define Kubernetes resources in a YAML file, also known as a Kubernetes manifest. I had already written a Kubernetes manifest to deploy my service locally, but I found it wouldn't deploy into EKS.
After following several tutorials online and referring to examples, I found the following Kubernetes manifest successfully deployed my app to EKS and made it accessible over the internet:
apiVersion: v1
kind: Service
metadata:
  name: datapool-manager-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
  namespace: dpm
  labels:
    app: datapool-manager
spec:
  type: LoadBalancer
  selector:
    app: datapool-manager
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9192
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: datapool-manager-deploy
  namespace: dpm
  labels:
    app: datapool-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: datapool-manager
  template:
    metadata:
      labels:
        app: datapool-manager
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - arm64
      containers:
        - name: datapool-manager-container
          image: thekiwisre/datapool-manager:latest
          ports:
            - containerPort: 9192
There are a few points to mention here:
NOTE: I created my Docker container on an M1 chip Mac. I initially ran into issues because, by default, it builds an image that is not compatible with the AMD64 architecture of my EKS nodes. To ensure compatibility I had to build my Docker image with the following command:
docker buildx build --push --platform=linux/amd64 -t thekiwisre/datapool-manager .
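Since the nodeAffinity rule in my manifest allows both amd64 and arm64 nodes, another option (an assumption on my part, not something I tested in this walkthrough) is to publish a multi-architecture image so either node type can pull a matching variant:

```shell
# Build and push a multi-arch image; requires a buildx builder that can
# target both platforms (e.g. via QEMU emulation).
docker buildx build --push \
  --platform=linux/amd64,linux/arm64 \
  -t thekiwisre/datapool-manager .
```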
You'll notice in my manifest I specifically define that these resources will be deployed to a namespace called "dpm" (short for "Datapool Manager"). Let's create that namespace in our EKS cluster:
kubectl create namespace dpm
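Equivalently, the namespace can be declared in YAML alongside the other resources, which keeps the whole deployment reproducible from manifests alone:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dpm
```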
Then deploy these two resources to the EKS cluster using the same command you would use for any other Kubernetes cluster:
kubectl apply -f ./kubernetes/dpm-manifest.yaml
Wait for your pod to reach the "Running" status using the following command (you can add the -w argument if you want to watch for changes in status):
kubectl get pods -n dpm
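If you'd rather block until the pod is ready than poll by eye, kubectl has a wait subcommand (the label selector below matches the app label from my manifest):

```shell
# Block until the pod is Ready, or give up after 5 minutes
kubectl wait pod \
  --for=condition=Ready \
  -l app=datapool-manager \
  -n dpm \
  --timeout=300s
```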
Once it's running we can check if the app is accessible over the internet. Run the following command:
kubectl get services -n dpm
Copy the EXTERNAL-IP of the service.
Even after the service has deployed it may take a few minutes for the app to become accessible over the internet, so be patient. Paste the EXTERNAL-IP into a browser with "http://" in front of it (or "https://" if your service is served over TLS/SSL), along with the path to your service. For example, my app was accessible at:
http://aff082e20c1a343888b3da8ac517c398-1084121075.ap-southeast-2.elb.amazonaws.com/DPM/STATUS
I can see my app is running and accessible over the internet.
That wasn't so hard after all!
Don't forget to delete the cluster when you're finished working (to avoid unnecessary charges on your AWS bill):
eksctl delete cluster --name dpm --region ap-southeast-2
Summary
There were really just two things I needed to do differently to deploy to EKS compared with my local Kubernetes cluster: building my Docker image for the right CPU architecture, and exposing the app through a LoadBalancer Service so AWS would provision a load balancer for it.
There's a lot of detail I'd like to explore later, such as how to configure my hosts and nodes, but that's a challenge for another day.