DevOps for Backend Developers - Part 1: Kubernetes
Hamza Idrees
In this article, we will explore Kubernetes, the well-known container orchestration tool, and walk through a quick demo of deploying containers on a Kubernetes cluster. If you are here, you already have a high-level idea of what containers (e.g. Docker) and orchestration tools (e.g. Kubernetes) are, so we will skip ahead to the practical stuff.
We will follow this structure in the training: setting up the environment, Pods, ReplicaSets, Deployments (rolling updates and recreates), and rolling back deployments.
Setting up the environment:
To run Kubernetes locally, we will use minikube. Head over to the Minikube website and follow the instructions to install it.
Run the following to start Kubernetes and test that the environment is up and running:
minikube start
kubectl get pods
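If both commands succeed, the environment is ready. As an optional sanity check (standard minikube and kubectl commands, not part of the original steps), you can also verify the cluster node:
minikube status
kubectl get nodes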
Once done, let's move on to practicing Kubernetes.
Assumptions:
You are already familiar with Docker
You have minikube installed and running
Let's dive into Kubernetes.
Pods:
A Pod is the smallest deployable unit in Kubernetes and consists of one or more containers that are scheduled and managed together. For example, a microservice can be containerized and deployed on Kubernetes as a Pod.
Creating a Pod deployment:
Deploying a Pod starts with writing a deployment description file.
Let's create a file called pod-deployment.yaml. The file will have 4 primary components: apiVersion, kind, metadata, and spec.
Here is pod-deployment.yaml using the nginx container:
apiVersion: v1
kind: Pod # because we are creating a Pod - case sensitive
metadata:
  name: be-app # name of the Pod as a backend app
  labels:
    app: be-app # used at more advanced stages with selectors. We will get to that
    tier: BE # optional - categorizes Pods by tier i.e. backend, frontend, database, etc.
    type: backend # optional - assign a type
spec:
  containers:
    - name: be-auth-container # name of the container inside the Pod
      image: nginx # image of the container
Let's deploy this on minikube. Run the following command:
kubectl create -f pod-deployment.yaml
Now let's check the deployment status:
kubectl get pods
At first, you will see the container being deployed, hence the 0/1; once it is successfully deployed, you will see 1/1 with the status Running. So there, we just deployed our first Pod. Hooray!
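If you would rather watch the Pod move from ContainerCreating to Running without re-running the command, kubectl's --watch flag does that (a small optional addition to the steps above):
kubectl get pods --watch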
Now, you may face some issues while deploying your first Pod; for example, the Pod will not start up if its properties are incorrect. To check the issues, run
kubectl describe pod <pod-name>
This will show the Pod's details and events and highlight issues. Let's change the image from nginx to nginx123.
apiVersion: v1
kind: Pod # because we are creating a Pod
metadata:
  name: be-app # name of the Pod as a backend app
  labels:
    app: be-app # used at more advanced stages with selectors. We will get to that
    tier: BE # optional - categorizes Pods by tier i.e. backend, frontend, database, etc.
    type: backend # optional - assign a type
spec:
  containers:
    - name: be-auth-container # name of the container inside the Pod
      image: nginx123 # image of the container (intentionally wrong)
Let's run it again.
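Since the Pod already exists from the previous step, one simple way to redeploy it with the changed image (a minimal sketch; kubectl apply would also work) is to delete it and recreate it from the updated file:
kubectl delete pod be-app
kubectl create -f pod-deployment.yaml
kubectl get pods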
Notice that the status is now an error (ErrImagePull / ImagePullBackOff) and READY is 0/1. Now let's see what went wrong:
kubectl describe pod be-app
which gives us:
D:\freelance\Devops> kubectl describe pod be-app
Name: be-app
Namespace: default
Priority: 0
Service Account: default
Node: minikube/192.168.59.100
Start Time: Sat, 20 Jul 2024 12:02:36 +0500
Labels: app=be-app
tier=BE
type=backend
Annotations: <none>
Status: Pending
IP: 10.244.0.10
IPs:
IP: 10.244.0.10
Containers:
be-auth-container:
Container ID:
Image: nginx123
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 78s default-scheduler Successfully assigned default/be-app to minikube
Normal SandboxChanged 74s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 34s (x3 over 77s) kubelet Pulling image "nginx123"
Warning Failed 31s (x3 over 75s) kubelet Failed to pull image "nginx123": Error response from daemon: pull access denied for nginx123, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 31s (x3 over 75s) kubelet Error: ErrImagePull
Normal BackOff 6s (x5 over 73s) kubelet Back-off pulling image "nginx123"
Warning Failed 6s (x5 over 73s) kubelet Error: ImagePullBackOff
In the output above, notice that under the Containers section we see the image we provided - nginx123. And in the Events at the end, we see that Kubernetes tried to pull the image nginx123 and failed.
This way, using the describe command, we can determine the failure reason and fix it.
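As a side note not covered in the original steps: once a container actually starts, kubectl logs is the usual complement to describe for application-level output:
kubectl logs be-app
kubectl logs be-app -c be-auth-container   # target a specific container in a multi-container Pod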
Let's move on.
Creating a ReplicaSet:
In production, we do not run a single Pod but multiple copies of our containers, spread across Pods on multiple Kubernetes nodes. To handle this, Kubernetes supports ReplicaSets, where we specify the number of Pods we require for a container.
The benefit is that if any Pod goes down for any reason, the ReplicaSet will spin up a new Pod in its place. So it gives us a way to seamlessly create and maintain the desired number of replicas for scalability and resilience.
Here's how to create a ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: be-app-replicaset
  labels:
    app: be-replicaset
    type: back-end
spec:
  template:
    # copy the Pod definition file from metadata onwards and paste it here
  replicas: 3 # create 3 copies of Pods
  selector: # control the count of those Pods which have the following labels
    matchLabels:
      # copy the labels from the Pod definition file
So, after copying from the Pod definition, it will look like this:
apiVersion: apps/v1 # note that the version is apps/v1
kind: ReplicaSet
metadata:
  name: be-app-replicaset
  labels:
    app: be-replicaset
    type: back-end
spec:
  template:
    metadata:
      name: be-app # name of the Pod as a backend app
      labels:
        app: be-app # used by the selector below
        tier: BE # optional - categorizes Pods by tier i.e. backend, frontend, database, etc.
        type: backend # optional - assign a type
    spec:
      containers:
        - name: be-auth-container # name of the container inside the Pod
          image: nginx # image of the container
  replicas: 3 # the number of Pods required
  selector:
    matchLabels:
      app: be-app # copied from our previous Pod template
      tier: BE
      type: backend
Now, let's create the ReplicaSet:
kubectl create -f replicaset-deployment.yml
This will create a ReplicaSet and the corresponding Pods as described (3 in this case).
Notice that the ReplicaSet has been created and it has in turn created 3 Pods. It controls the Pods on the basis of the selector. This means that if there was already a Pod running with the matching label be-app, only 2 new Pods would be created and the ReplicaSet would take control of the existing one.
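To see the self-healing behavior described above, you can delete one of the Pods and watch the ReplicaSet replace it (an optional experiment; the exact Pod name will differ in your cluster):
kubectl get pods
kubectl delete pod <one-of-the-replicaset-pods>
kubectl get pods   # a new Pod appears to restore the desired count of 3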
Let's now first create a pod, and then create our ReplicaSet:
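A sketch of the commands for this experiment, reusing the files from earlier (assuming pod-deployment.yaml is set back to the valid nginx image):
kubectl create -f pod-deployment.yaml         # create a single Pod labeled app=be-app
kubectl create -f replicaset-deployment.yml   # then create the ReplicaSet
kubectl get all                               # list the Pods and the ReplicaSet together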
Notice that we first created a Pod, then a ReplicaSet. When we list all resources, our first Pod was created 24 seconds ago; the ReplicaSet then created only 2 more Pods and took control of the existing Pod that matched the labels in its selector.
Modifying the number of Replicas:
To change the number of replicas/Pods, we can either edit the deployment file or simply change it at runtime (a runtime change will be lost on the next deployment from the file).
There are two ways to make the runtime change:
1. Edit the live ReplicaSet configuration directly:
kubectl edit replicaset be-app-replicaset
2. Update the replica count with a command:
kubectl scale replicaset be-app-replicaset --replicas=6
This way, we can scale the pods up and down.
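If you prefer to keep the YAML file as the source of truth, you can also change replicas in the file and push it with replace (assuming the ReplicaSet was created from replicaset-deployment.yml):
kubectl replace -f replicaset-deployment.yml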
Deployments:
Another important concept in Kubernetes is Deployments. In a practical scenario, we will be implementing CI/CD and will be required to push updates to production. Kubernetes gives us two approaches to push updates: a rolling update (RollingUpdate) and a full recreate (Recreate).
Creating a deployment - Rollout:
A Deployment definition file looks similar to a ReplicaSet, with kind Deployment and a strategy, i.e. RollingUpdate or Recreate.
apiVersion: apps/v1
kind: Deployment # notice the kind
metadata:
  name: be-deployment
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      name: be-app
  strategy: # strategy can be RollingUpdate or Recreate
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25% # % of extra Pods that may be created above the desired count during the update
      maxUnavailable: 25% # % of Pods that may be taken down at a time
    # OR
    # type: Recreate # drop all Pods and create them again (causes downtime)
  template:
    metadata:
      labels:
        name: be-app
    spec:
      containers:
        - name: be-app-container
          image: nginx
          ports: # define ports so applications can talk to our Pods
            - containerPort: 8080
              protocol: TCP
Let's run this:
kubectl create -f .\deployment\deployment-definition.yml --record
# --record stores the command as the reason for the deployment change; we will see this next with rollout history
If everything goes well, it will look like this:
Here, we have created a Deployment, which created a ReplicaSet, and that ReplicaSet in turn creates and manages the Pods.
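To see that hierarchy yourself, you can list each resource type (a quick check in place of the screenshot):
kubectl get deployments
kubectl get replicasets
kubectl get pods
kubectl get all   # or everything at once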
Let's get to the fun part: making some updates and rolling them out using the two available strategies. A typical use case would be moving to the latest version of your image, but for the sake of learning, let's try changing the image in our container from nginx to busybox.
Keeping everything else the same, let's modify the spec section inside template:
spec:
  containers:
    - name: be-app-container
      image: busybox # instead of nginx
      command: ["sh", "-c", "sleep 3600"] # run this command when busybox starts so the container keeps running
      ports:
        - containerPort: 8080
          protocol: TCP
Now, let's deploy it:
kubectl apply -f .\deployment\deployment-definition.yml
Now let's see the resources:
Notice that we have two different ReplicaSets here. The previous one now has 0 Pods and the new one has 4 Pods. That means we replaced our old deployment with the newer one, but also kept the configuration of the older ReplicaSet in case we need to roll back.
First, let's see how the pods were created by describing the deployment:
kubectl describe deployment be-deployment
Notice that the old ReplicaSet's Pods are being torn down step by step (25% of the total capacity at a time, as defined) while new Pods come up, making sure that the application stays available. If the new Pods fail to start, the rollout will be unsuccessful and our older application will keep serving traffic.
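You can also watch the rollout progress directly with the standard rollout status subcommand:
kubectl rollout status deployment be-deployment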
Creating a deployment - Recreate:
Now, let's deploy with the strategy Recreate. To do it, simply edit the following part of the yaml:
strategy:
type: Recreate
Now let's apply the change and then describe the deployment:
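The commands for this step look roughly like the following (the file path matches the one used earlier):
kubectl apply -f .\deployment\deployment-definition.yml
kubectl describe deployment be-deployment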
Notice that in this case, the Pods were directly scaled down from 4 to 0, and then the newer deployment took over and scaled them back up to 4. While this happens, the application experiences downtime.
Rollback Deployment:
Previously, we saw the two strategies to upgrade a deployment. What if we face an error with the new deployment and want to roll back to a previous one? Kubernetes is here to help.
First, let's check the history of our deployments:
kubectl rollout history deployment be-deployment
This gives us the history of deployments and the reason for each change, along with the commands used:
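A side note: in newer kubectl versions the --record flag is deprecated. The CHANGE-CAUSE column shown by rollout history can instead be populated by setting the kubernetes.io/change-cause annotation yourself, for example:
kubectl annotate deployment be-deployment kubernetes.io/change-cause="switch image from nginx to busybox"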
From here, we can rollback to a previous deployment:
kubectl rollout undo deployment be-deployment
Now let's check the rollout history again:
Notice that revision 1 has disappeared: after the rollback, revision 1 became revision 3, i.e. the latest deployment.
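If you need to return to a specific revision rather than just the previous one, rollout undo also accepts a --to-revision flag, for example:
kubectl rollout undo deployment be-deployment --to-revision=2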
Summary:
In the above demonstration, we looked at the main Kubernetes workload objects (Pod, ReplicaSet, Deployment), the strategies for rolling out a deployment (RollingUpdate, Recreate), and how to roll back to a previous deployment. This knowledge is essential to understand the basics of Kubernetes and successfully deploy Docker containers in an already set-up environment. In part 2, we will take a look at how to set up a Kubernetes cluster and its network configuration.
Till then, happy coding!