Day 24: Scaling Applications with Kubernetes HPA and Cluster Autoscaler
Alex Parra
AWS Community Builder | Platform Engineer | Kubernetes | GitOps | DevOps | SRE
Welcome to Day 24 of the Zero to Platform Engineer in 30 Days challenge! Today, we’re diving into autoscaling in Kubernetes, focusing on the Horizontal Pod Autoscaler (HPA) and the Cluster Autoscaler to efficiently manage workloads.
Why Autoscaling Matters in Kubernetes
Autoscaling ensures that applications:

- Handle traffic spikes without manual intervention
- Avoid over-provisioning, keeping costs under control
- Stay responsive as demand changes

There are two main types of autoscaling in Kubernetes:

- Horizontal Pod Autoscaler (HPA) – scales the number of pods in a workload
- Cluster Autoscaler – scales the number of nodes in the cluster
Horizontal Pod Autoscaler (HPA)
Step 1: Enable Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
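Before creating an HPA, it is worth confirming that the Metrics Server is actually serving metrics (it can take a minute or two after installation to become ready):

```shell
# Both commands should return CPU/memory figures once the Metrics Server is ready
kubectl top nodes
kubectl top pods -A
```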
Step 2: Deploy a Sample Application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            requests:
              cpu: "100m"
            limits:
              cpu: "200m"
kubectl apply -f deployment.yaml
Step 3: Create an HPA for the Deployment
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10
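The imperative command above can also be written declaratively, which is easier to keep in version control. A minimal sketch using the autoscaling/v2 API (the file name hpa.yaml is an assumption):

```yaml
# hpa.yaml – declarative equivalent of the kubectl autoscale command above
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # target 50% of the CPU request
```

Apply it with `kubectl apply -f hpa.yaml`.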
Step 4: Check HPA Status
kubectl get hpa
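The output will look roughly like this (exact columns vary by Kubernetes version, and the values depend on current load):

```
NAME               REFERENCE                     TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deployment   Deployment/nginx-deployment   0%/50%    1         10        1          30s
```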
HPA automatically increases or decreases the number of pods based on CPU usage.
Cluster Autoscaler
Step 1: Deploy Cluster Autoscaler on AWS EKS
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
--namespace kube-system \
--set autoDiscovery.clusterName=my-cluster \
--set awsRegion=us-west-2
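For auto-discovery to work, the Cluster Autoscaler also needs IAM permissions to manage Auto Scaling Groups, and the node groups' ASGs must carry the well-known discovery tags (the cluster name must match the autoDiscovery.clusterName value set above; my-cluster is a placeholder):

```
k8s.io/cluster-autoscaler/enabled    = true
k8s.io/cluster-autoscaler/my-cluster = owned
```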
Step 2: Verify Cluster Autoscaler
kubectl logs -f deployment/cluster-autoscaler -n kube-system
Cluster Autoscaler ensures that Kubernetes adds or removes nodes as needed.
Activity for Today
What’s Next?
Tomorrow, we’ll explore safe deployments using Canary Releases and Feature Flags.
Check it out here: Zero to Platform Engineer Repository
Feel free to clone the repo, experiment with the code, and even contribute if you’d like!