Day 9: Kubernetes Basics – Nodes, Pods, and Deployments
Shruthi Chikkela
Azure Cloud DevOps Engineer | Driving Innovation with Automation & Cloud | Kubernetes | Mentoring IT Professionals | Empowering Careers in Tech
Part of the #100DaysOfDevOps Challenge
Why Do We Need Kubernetes?
Imagine you’re running a containerised application with Docker. Everything works fine on your local machine, but what happens when you need to:
- Scale the application to handle millions of users?
- Ensure high availability if a container crashes?
- Deploy updates without downtime?
- Distribute workloads efficiently across multiple machines?
This is where Kubernetes comes in.
What is Kubernetes?
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerised applications.
It was originally developed by Google and later open-sourced under the Cloud Native Computing Foundation (CNCF).
At its core, Kubernetes helps organisations manage their containerised applications across clusters of machines, ensuring high availability, scalability, and automation of workloads.
Key Features of Kubernetes:
- Automated Scaling – Adjusts application instances based on traffic or resource usage
- Self-Healing – Restarts failed containers and replaces unhealthy nodes
- Load Balancing & Service Discovery – Ensures even traffic distribution across Pods
- Declarative Configuration & Automation – Uses YAML manifests to define the desired application state
- Multi-Cloud & Hybrid Support – Works across on-premises, AWS, Azure, and GCP
Why Use Kubernetes? (Problems It Solves)
Before Kubernetes, organisations faced challenges in deploying and managing applications at scale. Kubernetes solves multiple real-world problems, including:
1. Manual Container Management Is Complex
Without Kubernetes, you must manually start, stop, and monitor individual containers, which becomes unmanageable in large-scale applications.
- Without Kubernetes: You need to write custom scripts to manage containers
- With Kubernetes: Container orchestration is automated
2. Scaling Applications Efficiently
Applications need to handle varying traffic loads. Manually adding or removing instances is inefficient.
- Without Kubernetes: You manually spin up new instances when traffic increases
- With Kubernetes: Auto-scales based on demand via the Horizontal Pod Autoscaler (HPA)
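As a minimal sketch (assuming a Deployment named my-app exists and the metrics-server add-on is installed), CPU-based autoscaling is one command:
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10
kubectl get hpa    # Inspect the autoscaler's current target and replica count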
3. Service Discovery & Load Balancing
Microservices must communicate efficiently without hardcoding IP addresses.
- Without Kubernetes: You need external tools like Nginx for load balancing
- With Kubernetes: Service discovery and load balancing are built in
4. High Availability & Fault Tolerance
Applications should remain available even if nodes or containers fail.
- Without Kubernetes: If a container crashes, the application goes down
- With Kubernetes: Self-healing – failed containers are restarted automatically
5. CI/CD & Rolling Deployments
Deploying new versions of applications without downtime is challenging.
- Without Kubernetes: Deployments must be handled manually
- With Kubernetes: Supports Rolling Updates, Canary Deployments, and Blue-Green Deployments
Monolithic vs. Microservices vs. Containers vs. Kubernetes
- Monolithic Architecture – Traditional model where all application components are tightly coupled in a single unit.
- Microservices Architecture – Breaks the application into independent services that communicate via APIs.
- Containers – Lightweight, isolated environments that package applications with their dependencies.
- Kubernetes – Manages and orchestrates multiple containers efficiently.
Kubernetes vs. Docker Swarm vs. OpenShift
There are multiple container orchestration tools, but Kubernetes has become the industry leader.
1. Kubernetes
Pros:
- Open-source, widely adopted, and highly flexible
- Advanced features like auto-scaling, rolling updates, and self-healing
- Strong community support & cloud-native integrations (AWS, Azure, GCP)
Cons:
- Steep learning curve due to its complexity
2. Docker Swarm
Pros:
- Simple and lightweight compared to Kubernetes
- Tightly integrated with Docker
- Easier to set up for small-scale applications
Cons:
- Lacks advanced features like auto-scaling and self-healing
3. OpenShift
Pros:
- Enterprise Kubernetes with security & compliance features
- Built-in CI/CD tools like Tekton and Argo CD
- Fully supported by Red Hat for enterprise workloads
Cons:
- More restrictive compared to vanilla Kubernetes
Which One Should You Choose?
In short: pick Kubernetes for production-grade flexibility and ecosystem support, Docker Swarm for small, simple setups where ease of use matters most, and OpenShift when you need an enterprise-supported platform with built-in security, compliance, and CI/CD.
Setting Up Kubernetes (Minikube) & Deploying Your First Application
We will go step by step to:
- Install Minikube (a lightweight Kubernetes cluster)
- Deploy a simple application (Nginx)
- Expose it using a Service
Step 1: Install Minikube & kubectl
Minikube: A local Kubernetes cluster for testing
kubectl: Command-line tool to interact with Kubernetes
For Windows:
choco install minikube kubernetes-cli
For macOS:
brew install minikube kubectl
For Linux:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
Also install kubectl on Linux (the Windows and macOS commands above already include it):
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install kubectl /usr/local/bin/kubectl
Step 2: Start Kubernetes Cluster
minikube start --driver=docker
This creates a single-node Kubernetes cluster.
Check the status:
kubectl cluster-info
kubectl get nodes
Step 3: Deploy Your First Application (Nginx)
We will deploy an Nginx web server inside Kubernetes.
Create a Deployment:
kubectl create deployment nginx-app --image=nginx
Verify the deployment:
kubectl get pods
Check the logs:
kubectl logs -f <pod-name>
Step 4: Expose the Application
By default, Pods are not accessible from outside the cluster. We expose the Deployment using a Service.
kubectl expose deployment nginx-app --type=NodePort --port=80
Get the service details:
kubectl get svc
Access the app:
minikube service nginx-app
Step 5: Scaling the Application
Let's scale our app to 3 replicas:
kubectl scale deployment nginx-app --replicas=3
Verify the scaling:
kubectl get pods
Step 6: Clean Up
To delete the deployment & service:
kubectl delete deployment nginx-app
kubectl delete service nginx-app
Core Kubernetes Architecture
Kubernetes is a distributed system with a master-worker architecture. It ensures scalability, resilience, and automation for containerized applications.
- Nodes – Master & Worker Nodes
- Kubernetes Control Plane Components – API Server, Scheduler, Controller Manager, etc.
- Worker Node Components – Kubelet, Kube-Proxy, Container Runtime
Nodes in Kubernetes
A Kubernetes cluster consists of two types of Nodes:
1. Master Node – Manages the entire cluster
2. Worker Nodes – Run the applications (containers)
Key Function:
The Master Node controls the cluster, while Worker Nodes execute application workloads.
1. Master Node (Control Plane)
The Master Node is responsible for cluster management and consists of:
API Server (kube-apiserver): Frontend for Kubernetes; processes all cluster requests
Scheduler (kube-scheduler): Assigns workloads (Pods) to Worker Nodes
Controller Manager (kube-controller-manager): Manages cluster state & self-healing
etcd: Distributed key-value store for cluster data
Example:
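On most clusters (e.g., kubeadm- or Minikube-based ones), the control-plane components themselves run as Pods in the kube-system namespace, so you can see them with:
kubectl get pods -n kube-system
# Look for kube-apiserver-*, kube-scheduler-*, kube-controller-manager-*, and etcd-* Pods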
2. Worker Nodes
Worker Nodes run application workloads (containers).
Each Worker Node contains:
Kubelet: Talks to Master Node, ensures Pod health
Kube-Proxy: Manages networking between Pods
Container Runtime: Runs containers (Docker, containerd, CRI-O)
Example:
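To list the nodes in your cluster and inspect one of them:
kubectl get nodes -o wide             # Shows each node's IP, OS image, and container runtime
kubectl describe node <node-name>     # Shows capacity, conditions, and the Pods scheduled on it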
Kubernetes Control Plane Components:
The Control Plane runs on the Master Node and manages cluster operations.
1. API Server (kube-apiserver)
- Central communication hub for all Kubernetes operations
- Validates and processes API requests
- Exposes the Kubernetes REST API
Example:
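Every kubectl command is ultimately a REST call to the API Server. You can see this directly by querying the raw API:
kubectl get --raw /api/v1/namespaces/default/pods
# Equivalent to 'kubectl get pods', but returns the raw JSON served by kube-apiserver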
2. Scheduler (kube-scheduler)
- Assigns Pods to available Worker Nodes
- Considers CPU, memory, and node affinity
- Uses scheduling policies to ensure even workload distribution
Example:
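The Scheduler picks a node by comparing a Pod's requests and constraints against available capacity. A minimal sketch of the fields it considers (the disktype=ssd label is a hypothetical example):
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod
spec:
  nodeSelector:
    disktype: ssd        # Only schedule onto nodes carrying this label
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 500m        # The Scheduler only places the Pod on a node with this much free CPU
        memory: 256Mi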
3. Controller Manager (kube-controller-manager)
- Ensures cluster self-healing and the desired state
- Runs various controllers, such as:
  - Node Controller – monitors node health
  - ReplicaSet & Deployment Controllers – keep the desired number of Pods running
  - Job Controller – runs Pods to completion
  - EndpointSlice Controller – keeps Services wired to their Pods
Example:
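You can watch the controllers' self-healing loop in action. Assuming the nginx-app Deployment from the Minikube walkthrough above, delete one of its Pods and the ReplicaSet controller immediately creates a replacement:
kubectl delete pod <pod-name>
kubectl get pods     # A new Pod appears with a different name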
4. etcd (Cluster Database)
- Distributed key-value store that holds the cluster state
- Ensures high availability using the Raft consensus algorithm
Example:
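For example, etcd is typically backed up with its own CLI. A sketch using kubeadm-style default paths (endpoints and certificate locations vary by cluster and are assumptions here):
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key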
Worker Node Components
Each Worker Node runs the necessary services to execute and manage Pods.
1. Kubelet
- Agent that runs on every Worker Node
- Communicates with the API Server
- Ensures containers are running
Example:
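The kubelet runs as a system service on the node itself, not as a Pod. On a systemd-based node you can check it with:
systemctl status kubelet
journalctl -u kubelet --since "10 min ago"   # Recent kubelet logs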
2. Kube-Proxy
- Manages networking and load balancing
- Routes traffic between Pods and Services
Example:
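On many distributions kube-proxy runs as a DaemonSet, one Pod per node (the k8s-app=kube-proxy label is the kubeadm default):
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide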
3. Container Runtime
- Runs containers inside Pods
- Popular runtimes: containerd, CRI-O, and Docker Engine (via cri-dockerd)
Example:
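On a node you can talk to whichever runtime is installed through the CRI using crictl (requires node access and crictl installed):
crictl ps        # List running containers on this node
crictl images    # List pulled images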
Kubernetes Architecture Diagram
+------------------------------------------------------------+
|                     Kubernetes Cluster                      |
|  +------------------------------------------------------+  |
|  |                    Control Plane                     |  |
|  |   - API Server                                       |  |
|  |   - Scheduler                                        |  |
|  |   - Controller Manager                               |  |
|  |   - etcd                                             |  |
|  +------------------------------------------------------+  |
|                                                            |
|  +------------------------+  +------------------------+    |
|  |     Worker Node 1      |  |     Worker Node 2      |    |
|  |  - Kubelet             |  |  - Kubelet             |    |
|  |  - Kube-Proxy          |  |  - Kube-Proxy          |    |
|  |  - Container Runtime   |  |  - Container Runtime   |    |
|  |  - Runs Applications   |  |  - Runs Applications   |    |
|  +------------------------+  +------------------------+    |
+------------------------------------------------------------+
Kubernetes Pods & Containers
What Are Pods?
A Pod is the smallest deployable unit in Kubernetes.
It represents one or more containers running together on the same Node.
Why use Pods instead of standalone containers?
- Shared networking – Containers in a Pod communicate via localhost
- Shared storage – Containers in a Pod can share volumes
- Simplified scaling – Pods can be easily scaled with ReplicaSets
Example: a Pod running a web application container alongside a logging container.
Multi-Container Pods & Communication
Pods can contain one or more containers. Multi-container Pods are useful when containers need to:
- Share the same lifecycle (e.g., a web server + cache)
- Communicate locally (using localhost)
- Share storage volumes
1. Single-Container Pod
The most common type of Pod runs a single container.
YAML Example:
apiVersion: v1
kind: Pod
metadata:
  name: single-container-pod
spec:
  containers:
  - name: my-app
    image: nginx
    ports:
    - containerPort: 80
Key points:
- The Pod runs exactly one container (nginx) listening on port 80
- In practice you rarely create bare Pods directly; a Deployment usually manages them for you
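To try it out (assuming the YAML above is saved as pod.yaml):
kubectl apply -f pod.yaml
kubectl get pod single-container-pod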
2. Multi-Container Pod (Example: Web App + Logger)
Pods can run multiple containers that work together.
Example Use Case: a web server paired with a lightweight logging container that runs alongside it.
YAML Example:
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: web-app
    image: nginx
    ports:
    - containerPort: 80
  - name: logger
    image: busybox
    command: [ "sh", "-c", "while true; do echo logging; sleep 5; done" ]
Key points:
- Both containers share networking (localhost)
- The logger container writes a log line every 5 seconds
- Both containers can share storage (if a volume is defined)
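When a Pod has several containers, kubectl needs the -c flag to target one of them. Using the names from the YAML above:
kubectl logs multi-container-pod -c logger               # Output from the logger container only
kubectl exec -it multi-container-pod -c web-app -- sh    # Open a shell in the web-app container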
Pod Lifecycle
A Pod moves through several phases during its lifecycle:
Pending: Pod is created but not yet running
Running: At least one container is running
Succeeded: All containers have exited successfully
Failed: A container has crashed or failed
Unknown: The state is unknown due to node issues
Check Pod status:
kubectl get pods
Describe Pod events:
kubectl describe pod <pod-name>
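You can also watch phase transitions live while Pods start, run, and terminate:
kubectl get pods -w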
Init Containers
What Are Init Containers?
- Special containers that run before the main containers
- Used for setup tasks (e.g., downloading configs, waiting for dependencies)
- Exit before the main container starts
Use Cases:
- Waiting for a database or another Service to become reachable
- Downloading configuration files or seeding data
- Running schema migrations before the app boots
YAML Example:
apiVersion: v1
kind: Pod
metadata:
  name: init-container-pod
spec:
  initContainers:
  - name: init-setup
    image: busybox
    command: ["sh", "-c", "echo Initializing... && sleep 10"]
  containers:
  - name: main-app
    image: nginx
Key points:
- The Init Container runs first and completes before main-app starts
- If the Init Container fails, the Pod won't start
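A more realistic init container waits for a dependency to come up. A sketch (my-db is a hypothetical Service name):
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nslookup my-db; do echo waiting for my-db; sleep 2; done"]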
Sidecar Containers
What Are Sidecar Containers?
- Helper containers that run alongside the main container
- Used for logging, monitoring, or proxying requests
- Common in microservices architectures
Use Cases:
- Shipping application logs to a central system
- Running a metrics or monitoring agent
- Acting as a network proxy (e.g., the Envoy sidecar in a service mesh)
YAML Example:
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-pod
spec:
  containers:
  - name: main-app
    image: nginx
  - name: log-collector
    image: busybox
    command: ["sh", "-c", "while true; do cat /var/log/nginx/access.log; sleep 5; done"]
Key points:
- The log-collector container reads logs from the main app
- For this to actually work, both containers must share the log directory via a volume (see the sketch below)
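As written above, log-collector has its own filesystem and cannot see nginx's log file. A minimal sketch of the shared emptyDir volume that makes the pattern work:
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-pod
spec:
  volumes:
  - name: nginx-logs
    emptyDir: {}           # Scratch volume shared by both containers
  containers:
  - name: main-app
    image: nginx
    volumeMounts:
    - name: nginx-logs
      mountPath: /var/log/nginx
  - name: log-collector
    image: busybox
    command: ["sh", "-c", "while true; do cat /var/log/nginx/access.log 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: nginx-logs
      mountPath: /var/log/nginx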
Kubernetes Deployments & ReplicaSets
What is a Deployment?
A Deployment in Kubernetes is a higher-level abstraction that manages a set of Pods and ensures:
- The desired replica count is maintained
- Self-healing – if a Pod fails, a new one is created
- Rolling updates can be performed without downtime
- Rollbacks can be triggered if an update fails
Key Features of a Deployment:
- Ensures the right number of replicas are running
- Supports scaling up/down (manually or via the HPA)
- Allows rolling updates without downtime
- Provides rollback functionality
Deployment YAML Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest
        ports:
        - containerPort: 80
What this does:
- Creates a Deployment named my-app
- Runs 3 replicas of the nginx container
- Keeps the replica count steady and replaces failed Pods (self-healing)
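To apply and inspect it (assuming the manifest is saved as deployment.yaml):
kubectl apply -f deployment.yaml
kubectl get deployments
kubectl get rs                    # The ReplicaSet the Deployment created and manages
kubectl get pods -l app=my-app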
ReplicaSet vs. ReplicationController
What is a ReplicaSet?
A ReplicaSet ensures a specified number of identical Pods are running.
- Replaces the older ReplicationController
- Uses selectors to match running Pods
- Works with Deployments for rolling updates
ReplicaSet YAML Example:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx
Key points:
- Maintains 3 replicas of the nginx Pod
- Self-heals by replacing failed Pods
Best Practice: Use Deployments instead of managing ReplicaSets directly!
Rolling Updates vs. Recreate Strategy
When updating an application, Kubernetes offers two main strategies:
1. Recreate Strategy (Downtime)
- Stops all old Pods before starting new ones
- Causes downtime while updating
- Suitable for stateful applications where running multiple versions is problematic
Example YAML (Recreate Strategy), set under the Deployment's spec:
strategy:
  type: Recreate
Drawback: temporary downtime until the new Pods are up!
2. Rolling Update Strategy (Zero Downtime)
- Gradually replaces old Pods with new ones
- Keeps the application available during the update
- Ideal for stateless microservices
Example YAML (Rolling Update Strategy), set under the Deployment's spec:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 1
- maxSurge: 1 → allows 1 extra Pod during the update
- maxUnavailable: 1 → at most 1 Pod may be unavailable
Rolling Update Commands:
1. Check the current Deployment:
kubectl get deployments
2. Update the Deployment:
kubectl set image deployment/my-app my-container=nginx:1.21
3. Roll back if needed:
kubectl rollout undo deployment/my-app
Zero-downtime updates with rollback support!
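You can watch the rollout's progress and inspect its history with:
kubectl rollout status deployment/my-app
kubectl rollout history deployment/my-app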
Blue-Green & Canary Deployments
1. Blue-Green Deployment
- Runs two environments – Blue (current) and Green (new)
- Switches traffic only after Green is stable
- Avoids partial failures during updates
Workflow:
- Step 1: Deploy Blue (the current version)
- Step 2: Deploy Green (the new version)
- Step 3: If Green works, switch traffic to it
- Step 4: Delete Blue after confirmation
Example: using a Kubernetes Service to switch traffic:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: green-app   # Change this from blue-app to green-app when ready
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Instant rollback → just switch traffic back to Blue if needed!
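The switch itself can be a single command. A sketch using the hypothetical blue-app/green-app labels from the example above:
kubectl patch service my-app-service -p '{"spec":{"selector":{"app":"green-app"}}}'
# Roll back by patching the selector back to blue-app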
2. Canary Deployment
- Gradually releases new features to a subset of users
- Tests stability before full rollout
- Uses traffic splitting (e.g., 90% to old, 10% to new)
Example: deploying ~10% of traffic to the new version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary-app
spec:
  replicas: 1   # Canary starts with 1 Pod
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: canary
    spec:
      containers:
      - name: my-container
        image: nginx:latest
If no issues appear, gradually increase traffic to the canary version!
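With this replica-based approach, the traffic split simply follows the Pod counts: if a stable Deployment (also labeled app: my-app) runs 9 replicas and the canary runs 1, a Service selecting app: my-app sends roughly 10% of requests to the canary. Ramping up is then just a matter of scaling:
kubectl scale deployment canary-app --replicas=3   # Roughly 25% of traffic alongside 9 stable replicas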
Understanding Kubernetes Networking
Kubernetes follows a flat networking model, meaning:
- Every Pod gets a unique IP address
- Pods can communicate with each other without NAT (Network Address Translation)
- Kubernetes manages internal networking using Services
However, since Pods are ephemeral (they can be deleted or recreated), direct communication via Pod IPs is unreliable.
Instead, Kubernetes uses Services to provide stable network endpoints.
Kubernetes Services – Types & Use Cases
A Service in Kubernetes is an abstraction that allows a group of Pods to be accessed reliably.
1. ClusterIP (Default, Internal Communication Only)
- Exposes the Service only within the cluster
- Can be reached via http://<service-name>:<port>
- Used for internal microservice communication
Example YAML (ClusterIP Service)
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
- This Service routes traffic to Pods listening on port 8080
- Use Case: internal communication between microservices
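To test it from inside the cluster (a sketch using a throwaway busybox Pod):
kubectl run tmp-test --rm -it --image=busybox --restart=Never -- wget -qO- http://my-service:80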
2. NodePort (Exposes the Service on Each Node's IP & Port)
- Opens a static port on every Worker Node
- Can be reached via http://<node-ip>:<nodeport>
- Not recommended for public-facing applications
Example YAML (NodePort Service)
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30007   # Exposes the service on <Node-IP>:30007
- External access via: http://<Node-IP>:30007
- Use Case: direct external access for testing purposes
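On Minikube you can look up the node IP and hit the NodePort directly:
minikube ip                      # Prints the node's IP address
curl http://$(minikube ip):30007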
3. LoadBalancer (Exposes the Service to the Internet via a Cloud Load Balancer)
- Creates an external cloud load balancer
- Automatically assigns a public IP
- Ideal for production deployments in cloud environments
Example YAML (LoadBalancer Service)
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
- Kubernetes provisions a public IP and forwards traffic to the Pods
- Use Case: exposing applications publicly on the internet
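The external IP appears once the cloud provider provisions the load balancer (on Minikube, run minikube tunnel in a separate terminal to get one):
kubectl get svc my-loadbalancer-service   # EXTERNAL-IP shows <pending> until provisioned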
4. ExternalName (Maps a Service to an External DNS Name)
- Does not route traffic within the cluster
- Maps a Kubernetes Service to an external DNS name
- Useful for integrating with external databases and APIs
Example YAML (ExternalName Service)
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: example.com   # Maps the service to example.com
- Requests to my-external-service resolve to example.com (via a DNS CNAME record)
- Use Case: accessing external services like databases or APIs
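You can confirm the mapping from inside the cluster; the lookup returns a CNAME pointing at example.com:
kubectl run tmp-dns --rm -it --image=busybox --restart=Never -- nslookup my-external-service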
Ingress Controllers & Ingress Rules
A Service alone doesn’t provide advanced routing capabilities like:
- Path-based routing (e.g., /api → Backend, /app → Frontend)
- TLS termination (handling HTTPS traffic)
- Load balancing across multiple Services
For these features, Kubernetes uses Ingress Controllers & Ingress Rules.
1. Ingress Controller
- Acts as the entry point for external HTTP/HTTPS traffic
- Implementations include NGINX, Traefik, HAProxy, AWS ALB, etc.
Example: installing the NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
- Deploys an NGINX-based Ingress Controller
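Verify that the controller is running (the manifest above installs it into the ingress-nginx namespace):
kubectl get pods -n ingress-nginx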
2. Ingress Rule – Path-Based Routing
- Defines rules for routing external traffic to internal Services
Example YAML (Ingress Resource with Multiple Paths)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 80
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
- Requests to myapp.com/api → routed to backend-service
- Requests to myapp.com/app → routed to frontend-service
- Use Case: advanced routing for microservices
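Even without DNS set up for myapp.com, you can test the routing by sending the Host header straight to the controller's IP (a sketch; backend-service and frontend-service must already exist):
curl -H "Host: myapp.com" http://<ingress-controller-ip>/api
curl -H "Host: myapp.com" http://<ingress-controller-ip>/app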
DNS & Service Discovery in Kubernetes
Kubernetes provides built-in DNS for service discovery, so that:
- Services can be reached by name (http://my-service)
- There is no need to hardcode IP addresses
How It Works
- Kubernetes assigns a DNS name to every Service
- Example: a Service named my-app in the default namespace can be reached at:
my-app.default.svc.cluster.local
- Use Case: allowing microservices to find each other without IP dependencies
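The full pattern is <service>.<namespace>.svc.cluster.local; within the same namespace the short name alone resolves. You can check it from a throwaway Pod:
kubectl run tmp-dns --rm -it --image=busybox --restart=Never -- nslookup my-app.default.svc.cluster.local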
Network Policies – Securing Communication
By default, Pods in Kubernetes can communicate with each other without any restrictions.
In many environments, however, security and compliance requirements demand that access between Pods be restricted, so that only specific Pods can talk to each other.
To enforce these restrictions, Kubernetes provides Network Policies.
What are Network Policies?
A Network Policy is a set of rules that control the ingress (incoming) and egress (outgoing) traffic for Pods in a Kubernetes cluster.
They allow you to define which Pods can communicate with each other, based on criteria such as Pod selectors, namespaces, IP blocks, and ports.
Why Use Network Policies?
Network Policies help in:
- Restricting traffic between applications and namespaces (least privilege)
- Limiting the blast radius if a Pod is compromised
- Meeting security and compliance requirements
How Do Network Policies Work?
Network Policies are implemented by network plugins (like Calico, Cilium, or Weave), which enforce the rules defined in the policy.
The policies themselves don't define how traffic is routed; rather, they tell the network plugin which traffic should be allowed or denied.
Components of a Network Policy
- podSelector – selects the Pods the policy applies to
- policyTypes – whether the policy covers Ingress, Egress, or both
- ingress / egress rules – the allowed peers (via podSelector, namespaceSelector, or ipBlock) and ports
Example of a Simple Network Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
In this example:
- The policy applies to Pods labeled app: backend
- Only Pods labeled app: frontend may send traffic to them
- All other ingress traffic to the backend Pods is denied
Key Points to Remember
- Network Policies are allow-lists: once a Pod is selected by any policy, traffic not explicitly allowed is denied
- They only take effect if your CNI plugin (e.g., Calico, Cilium, Weave) enforces them
- Network Policies are namespaced resources
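A common starting point is a default-deny policy for a namespace, on top of which you allow specific flows like the frontend-to-backend rule above. A minimal sketch:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # Selects every Pod in the namespace
  policyTypes:
  - Ingress            # No ingress rules defined, so all incoming traffic is denied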
Layman's Example:
Think of a Kubernetes cluster as a restaurant: the Control Plane is the manager who takes orders and decides which chef prepares which dish, while the Worker Nodes are the kitchen staff who actually do the cooking (run your containers).
A Kubernetes cluster consists of two types of nodes:
- Master Node (Control Plane) – The brain that decides what runs where.
- Worker Nodes – The machines that actually run your applications.
Real-World Use Case:
Netflix runs thousands of Kubernetes nodes globally to serve millions of users seamlessly, ensuring zero downtime and dynamic scaling based on demand.
Hands-On: Deploying a Simple Nginx Web Server in Kubernetes
Step 1: Create a Kubernetes Deployment
Save the following YAML as nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3   # Run 3 instances (Pods)
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Step 2: Apply the Deployment to Your Cluster
kubectl apply -f nginx-deployment.yaml
Kubernetes will now:
- Create 3 replicas (Pods) of an Nginx container.
- Distribute them across available worker nodes.
- Ensure they are always running (self-healing).
Step 3: Verify the Deployment
Check if the pods are running:
kubectl get pods
Check the deployment status:
kubectl get deployments
Kubernetes Best Practices for Deployments
- Use Labels & Selectors – Group related resources together.
- Set Resource Limits – Prevent one Pod from consuming all system resources.
- Enable Rolling Updates – Deploy changes gradually to prevent downtime.
- Implement Auto-Scaling – Adjust replicas dynamically based on traffic.
- Use Readiness & Liveness Probes – Ensure only healthy Pods receive traffic (see the sketch below).
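A minimal sketch of resource limits and probes on the nginx container from Step 1 (the thresholds and probe paths are illustrative assumptions):
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          requests:
            cpu: 100m          # Used by the Scheduler to place the Pod
            memory: 128Mi
          limits:
            cpu: 250m          # The container is throttled beyond this
            memory: 256Mi      # Exceeding this gets the container OOM-killed
        readinessProbe:        # Traffic is only routed while this succeeds
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:         # The container is restarted if this keeps failing
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20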
Real-World Industry Use Cases
How Companies Use Kubernetes in Production
- Spotify – Uses Kubernetes for auto-scaling music recommendation services.
- Tesla – Runs Kubernetes on bare-metal clusters for self-driving AI workloads.
- Amazon Prime Video – Uses Kubernetes to deploy new features gradually without downtime.
What's Next?
Day 10 – Kubernetes Services & ConfigMaps: Exposing Applications
Now that we’ve deployed applications, we need a way to access them and configure them dynamically!
In Day 10, we’ll cover:
- What are Kubernetes Services? – Load Balancing & Networking
- How ConfigMaps Help Manage Application Configurations
- Step-by-Step: Exposing a Kubernetes Application to the Internet
Follow Shruthi Chikkela on LinkedIn for more DevOps insights!
Subscribe to my newsletter to stay ahead in your DevOps journey!