How to upload CV to EKS Cluster and add Observability features?
Image Courtesy @DigitalOcean Website

Two months ago, I came across a post from Forrest Brazeal about a Kubernetes challenge hosted by the Cloud Resume Challenge community: create an HTML resume on Kubernetes. I was intrigued and decided to first learn Kubernetes and then give this a go. Link to the original post about this challenge here.

I have been hands-on with AWS cloud environments, so this is my attempt to host a sample HTML resume on Amazon EKS. I also wanted to check the process flow and track any errors, so I added observability features to my cluster as well.

This article covers all the steps I took: first setting up a Cloud9 environment, then creating an EKS cluster, building and pushing a Docker image, and finally adding observability features to the cluster.

Pre-requisites


  • An AWS personal account
  • Basic working knowledge of Git, the Cloud9 IDE, and the AWS Console
  • A sample resume in HTML format
  • Understanding of basic Docker commands (build, tag, login, push)
  • An account on hub.docker.com
  • Running an EKS cluster is chargeable, so delete the environment once you finish to avoid ongoing costs.


Step A (Cloud9 IDE Setup)


  1. Open the Cloud9 service in the AWS Console and create a new environment.


Select Amazon Linux 2 as the platform for the EC2 instance. Leave all other settings at their defaults.

Click Create to create the environment and wait for the EC2 instance to start running.

2. Check whether an IAM role with the AdministratorAccess policy exists. If not, create one.

3. Once the role exists, attach the IAM role to the EC2 instance associated with the Cloud9 environment. (A CLI sketch of this role setup is included at the end of this step for reference.)


4. Open the "Preferences" tab in the Cloud9 console, open "AWS Settings", and check that "AWS Managed Temporary Credentials" is Off. If it is On, set it to Off.

Step B (eksctl and kubectl Setup via CLI)


Go to the Cloud9 terminal and execute the commands below in sequence.

  1. Ensure the IAM role you attached to the Cloud9 instance is the one in use when you execute the command below. The check greps for the role name eks-resume-admin; change it if you named your role differently.

aws sts get-caller-identity --query Arn | grep eks-resume-admin -q && echo "IAM role valid" || echo "IAM role NOT valid"
        

2. Install eksctl.

 curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

sudo mv -v /tmp/eksctl /usr/local/bin        

3. Install kubectl

curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.5/2024-01-04/bin/linux/amd64/kubectl

chmod +x ./kubectl

mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
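
To confirm both installs worked, you can print their versions (standard flags for both tools):

# Quick sanity check that eksctl and kubectl are installed and on the PATH
eksctl version
kubectl version --client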
        

4. Install jq, envsubst (from GNU gettext utilities) and bash-completion

sudo yum -y install jq gettext bash-completion        

5. Install yq for yaml processing

echo 'yq() {
  docker run --rm -i -v "${PWD}":/workdir mikefarah/yq "$@"
}' | tee -a ~/.bashrc && source ~/.bashrc        

6. Install c9 to open files in Cloud9

npm install -g c9        

7. Install k9s, a Kubernetes CLI to manage your clusters in style.

curl -sS https://webinstall.dev/k9s | bash        

8. Verify the binaries are in the path and executable.

for command in kubectl jq envsubst aws
  do
    which $command &>/dev/null && echo "$command in path" || echo "$command NOT FOUND"
  done        

9. Enable kubectl bash_completion

kubectl completion bash >>  ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion        

10. Enable some Kubernetes aliases and helper tools (fzf, kns, ktx, and a kgn alias).

git clone --depth 1 https://github.com/junegunn/fzf.git ~/.fzf

~/.fzf/install --all

sudo curl https://raw.githubusercontent.com/blendle/kns/master/bin/kns -o /usr/local/bin/kns && sudo chmod +x $_

sudo curl https://raw.githubusercontent.com/blendle/kns/master/bin/ktx -o /usr/local/bin/ktx && sudo chmod +x $_

echo "alias kgn='kubectl get nodes -L beta.kubernetes.io/arch -L eks.amazonaws.com/capacityType -L beta.kubernetes.io/instance-type -L eks.amazonaws.com/nodegroup -L topology.kubernetes.io/zone -L karpenter.sh/provisioner-name -L karpenter.sh/capacity-type'" | tee -a ~/.bashrc

source ~/.bashrc        

11. Configure AWS CLI with your current region as default.

export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60")

AWS_REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/placement/region 2> /dev/null)

export LAB_CLUSTER_ID=ekslab-resume        

12. Check if AWS_REGION is set to desired region

test -n "$AWS_REGION" && echo AWS_REGION is "$AWS_REGION" || echo AWS_REGION is not set        

13. Update bash_profile

echo "export ACCOUNT_ID=${ACCOUNT_ID}" | tee -a ~/.bash_profile
echo "export AWS_REGION=${AWS_REGION}" | tee -a ~/.bash_profile
echo "export LAB_CLUSTER_ID=ekslab-resume" | tee -a ~/.bash_profile
aws configure set default.region ${AWS_REGION}
aws configure get default.region        

14. Optional (Increase Disk Size on Cloud9 instance)

The script below provides a way to increase the disk size of the Cloud9 instance. It takes the desired volume size in GiB as a command-line argument (defaulting to 20 GiB) and resizes the instance's EBS volume and file system to that value.

Increasing the disk size is optional; it did not have any impact on the actual working of this scenario.

Script


#!/bin/bash

# Specify the desired volume size in GiB as a command line argument. If not specified, default to 20 GiB.
SIZE=${1:-20}

# Get the ID of the environment host Amazon EC2 instance.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
INSTANCEID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/instance-id 2> /dev/null)
REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/placement/region 2> /dev/null)

# Get the ID of the Amazon EBS volume associated with the instance.
VOLUMEID=$(aws ec2 describe-instances \
  --instance-id $INSTANCEID \
  --query "Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId" \
  --output text \
  --region $REGION)

# Resize the EBS volume.
aws ec2 modify-volume --volume-id $VOLUMEID --size $SIZE

# Wait for the resize to finish.
while [ \
  "$(aws ec2 describe-volumes-modifications \
    --volume-id $VOLUMEID \
    --filters Name=modification-state,Values="optimizing","completed" \
    --query "length(VolumesModifications)" \
    --output text)" != "1" ]; do
  sleep 1
done

# Check if we're on an NVMe filesystem
if [[ -e "/dev/xvda" && $(readlink -f /dev/xvda) = "/dev/xvda" ]]
then
# Rewrite the partition table so that the partition takes up all the space that it can.
  sudo growpart /dev/xvda 1
# Expand the size of the file system.
# Check if we're on AL2 or AL2023
  STR=$(cat /etc/os-release)
  SUBAL2="VERSION_ID=\"2\""
  SUBAL2023="VERSION_ID=\"2023\""
  if [[ "$STR" == *"$SUBAL2"* || "$STR" == *"$SUBAL2023"* ]]
  then
    sudo xfs_growfs -d /
  else
    sudo resize2fs /dev/xvda1
  fi

else
# Rewrite the partition table so that the partition takes up all the space that it can.
  sudo growpart /dev/nvme0n1 1

# Expand the size of the file system.
# Check if we're on AL2 or AL2023
  STR=$(cat /etc/os-release)
  SUBAL2="VERSION_ID=\"2\""
  SUBAL2023="VERSION_ID=\"2023\""
  if [[ "$STR" == *"$SUBAL2"* || "$STR" == *"$SUBAL2023"* ]]
  then
    sudo xfs_growfs -d /
  else
    sudo resize2fs /dev/nvme0n1p1
  fi
fi
        

Running the script.

chmod +x resize.sh 
./resize.sh 20        
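
Once the script finishes, you can confirm the resize took effect:

# Confirm the root filesystem now reflects the new size
df -h /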

Step C (Creating EKS Cluster in AWS)


  1. Now we will create an EKS cluster from the Cloud9 terminal. This takes nearly 10 minutes to complete, and the cluster is chargeable. The eksctl documentation for cluster creation is here, and I will run the command below in the Cloud9 terminal. Change "sks-eks-cluster" if you want a different name for your EKS cluster.

eksctl create cluster --name sks-eks-cluster
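
eksctl accepts additional flags if you want more control over the region and node group. The values below are illustrative assumptions, not what I actually used:

# Hypothetical variant: explicit region and a small managed node group
eksctl create cluster \
  --name sks-eks-cluster \
  --region "$AWS_REGION" \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 2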

Response after nearly 10 minutes

Step D (Uploading Files to EKS Cluster)


I have created a sample HTML resume template in my GitHub. We will first clone that repository in Cloud9, build a Docker image, and push it to Docker Hub. Then we will modify the loadbalancerservice.yaml file to reference the new image, which will be served once the load balancer is up.

  1. Run the command below to clone the repository.

git clone https://github.com/KislayaSrivastava/resume-kubernetes.git        

Then cd into the resume-kubernetes folder.

2. Use the docker build command to create an image

docker build -t my-eks-website .        


3. Run docker images, then copy the image ID of the my-eks-website image and keep it handy (a notepad works).
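
For example, you can narrow the listing to just that repository:

# Show only the my-eks-website image and its IMAGE ID
docker images my-eks-website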

4. Log in to hub.docker.com and create a new repository with the same name, my-eks-website.

5. Now go back to Cloud9 and log in to Docker Hub from the CLI using the command below. Replace <dockerhubUsername> with your Docker Hub username.

docker login --username=<dockerhubUsername>        

At the prompt, provide your Docker Hub password.

6. Tag the container image using the command below.

docker tag <imageIDofmy-eks-website> <DockerhubUsername>/<DockerHubRepository>         

7. Push the container image by running the command below.

docker push <DockerhubUsername>/<DockerHubRepository>        

Checking the Docker Hub website, the my-eks-website repository now shows the image with the latest tag.

8. Open loadbalancerservice.yaml, and under containers, replace “image: httpd” with “image: <DockerhubUsername>/<DockerHubRepository>”

This file is used to create the load balancer, and the newly created load balancer will reference the new image uploaded to Docker Hub.
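
If you prefer to make this edit from the terminal, a sed one-liner like the one below should work; the placeholders are the same ones used above and must be replaced with your own values:

# In-place edit of the manifest; substitute your Docker Hub username and repository
sed -i 's|image: httpd|image: <DockerhubUsername>/<DockerHubRepository>|' loadbalancerservice.yaml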

Apply this manifest file using the command below.

kubectl apply -f loadbalancerservice.yaml        

9. After about a minute the load balancer will be created. Get the load-balancer URL by running the command below.

kubectl get service        

10. Copy the load-balancer URL (under the EXTERNAL-IP column) and open it in a browser. You will see the resume being served by the Kubernetes cluster.
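
You can also check it from the Cloud9 terminal before opening a browser:

# Fetch the resume over HTTP; replace <EXTERNAL-IP> with the value from kubectl get service
curl http://<EXTERNAL-IP>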

A portion of my resume

Step E (Observability Setup)


  1. Go to the EKS console and select your cluster.
  2. If you see an access-related message at the top, click "Create access entry", go through the steps, and choose the AmazonEKSAdmin policy from the drop-down. Click Create.

3. Then click the Add-ons tab.

4. Click “Get more add-ons”. Scroll down and select “Amazon CloudWatch Observability”

Scroll down further and click “Next”

  5. We need to create the IRSA (IAM role for service accounts) for the CloudWatch agent.
  6. Click "CloudWatch Observability add-on User Guide" and run Step 2 on that page; a sketch of that command appears after the cluster-name check below.

You can get the cluster name by using the command below.

eksctl get cluster         
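
For reference, the IRSA creation in Step 2 of that guide looks roughly like the command below. Treat this as a sketch and follow the guide's current version; the role name here is an assumption, and the service-account name and namespace follow the add-on's defaults as I understand them.

# Approximate IRSA creation for the CloudWatch agent; verify against the add-on user guide
eksctl create iamserviceaccount \
  --name cloudwatch-agent \
  --namespace amazon-cloudwatch \
  --cluster sks-eks-cluster \
  --role-name cloudwatch-agent-role \
  --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
  --role-only \
  --approve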

Running the commands

Go back to the EKS console and click the refresh icon (next to the drop-down showing "Not set"). Select the newly created role from the drop-down.

Click Next and then click Create.

It takes a couple of minutes to install the agent, after which logs and metrics start flowing. Once installed, the add-on shows up in the Add-ons section of the EKS cluster.

Go to CloudWatch Container Insights, and you should see the metrics from the cluster flowing in. Explore the metrics.

Next, go to Log groups, search for your cluster name, and you should see logs flowing in as well.
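
From the CLI, a similar check might look like this; the /aws/containerinsights/... prefix is the naming Container Insights uses by default, so verify that it matches what you see in your account:

# List the CloudWatch log groups created for the cluster by Container Insights
aws logs describe-log-groups --log-group-name-prefix /aws/containerinsights/sks-eks-cluster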

Opening the application log group and re-fetching the resume via the load-balancer endpoint, I find a record showing the stdout output and the container details.

This shows that requests are being captured in the logs and displayed correctly.

Step F (Cleanup)


Running an EKS cluster is costly, so to avoid being charged further, make sure you delete the cluster using the command below.

eksctl delete cluster --name=<NameOfYourCluster>        
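
Before running the cluster deletion, it is also worth deleting the load-balancer Service so that the ELB it created is cleaned up explicitly; this is my own suggestion rather than part of the original challenge:

# Optional: remove the Service (and the ELB it created) before deleting the cluster
kubectl delete -f loadbalancerservice.yaml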

Learning


Understanding the process flow was my biggest takeaway. The AWS and Kubernetes documentation is great: I was able to find answers to my questions, learned a lot by working hands-on, and successfully completed this challenge.
