Kubernetes Basic & Advanced Deployment Strategies With Code Examples and Use Case Scenarios

Kubernetes is a hot topic today: companies of every size, small, medium, and large, are either already using Kubernetes or considering it and working towards moving their workloads to a Kubernetes cluster.

But even before getting to deployment strategies, you first need to know:

What is Kubernetes?

How does the entire Kubernetes system work?

How to create a Kubernetes Cluster? (Try the interactive tutorial)

How to build a Docker image from your project, so that you can use that same image in the Kubernetes cluster to run the application?

How does a Kubernetes Deployment work?

Which Kubernetes apiVersion Should I Use?

Best courses for learning the above:

Free learning resources:

Kubernetes:

https://linuxacademy.com/course/kubernetes-essentials/

https://linuxacademy.com/course/kubernetes-the-hard-way/

https://www.youtube.com/watch?v=0KQndcIedeg

https://www.youtube.com/watch?v=Mi3Lx7yk3Hg

Docker:

https://www.youtube.com/watch?v=zJ6WbK9zFpI


Paid:

https://www.udemy.com/course/docker-mastery/

https://www.udemy.com/course/docker-and-kubernetes-the-complete-guide/

https://linuxacademy.com/course/docker-deep-dive-part-1/


After all of the above comes the question of what your deployment strategy will be. There are multiple deployment strategies available, and as everyone knows, one size does not fit all.

Each application type has its own requirements: budgetary constraints, availability requirements, available resources, and many other deciding factors. While discussing each deployment method we will go through its use cases, so you can decide which deployment strategy best fits your application's needs.

Let's get started:

Basic Deployment Strategies:

Recreate:

This strategy kills all the existing Pods and then brings up the new ones. It results in a quick deployment, but also in downtime during the window when the old Pods are already down and the new Pods have not yet come up.

Code Sample using Nginx example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80


Use case: This is for development environments, where you want to deploy changes as quickly as possible so you can test them. It is not suitable for production applications, as it creates downtime and a temporary "service unavailable" situation.
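A quick way to observe this behaviour, assuming the manifest above is saved as nginx-deployment.yaml (a file name chosen just for this illustration):

# apply (or re-apply) the Deployment with strategy type Recreate
kubectl apply -f nginx-deployment.yaml

# watch the Pods: all old Pods terminate before any new Pod starts
kubectl get pods -l app=nginx -w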


Rolling Update:

The Deployment updates Pods in a rolling update fashion when .spec.strategy.type==RollingUpdate. You can specify maxUnavailable and maxSurge to control the rolling update process.

Max Unavailable

.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute number is calculated from percentage by rounding down. The value cannot be 0 if .spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of the desired Pods immediately when the rolling update starts. Once new Pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.

Max Surge

.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The value cannot be 0 if MaxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods.
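For instance, with 10 desired replicas, a percentage-based strategy block (shown here as a fragment, not a full manifest) works out to at most 3 extra Pods and at most 3 unavailable Pods at any point during the update:

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 30%        # 30% of 10 = 3 extra Pods allowed above the desired count
      maxUnavailable: 30%  # 30% of 10 = 3 Pods may be unavailable at any time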

Code Sample using Nginx example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # how many Pods we can add at a time
      maxUnavailable: 0  # how many Pods can be unavailable during the update
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80


Use case:

The new version is released gradually, so the impact on production-level deployments is minimal or none.

It is ideal for servers handling WebSocket connections, or for backends such as MongoDB or Redis. If all Pods stopped at once, every client would try to reconnect to the new Pods simultaneously as soon as they started, overloading them and exhausting their resources, and there would be downtime in the meantime. With the rolling strategy above, one new-version Pod is created before an old-version Pod goes down, so as the old Pods drain away slowly, the disconnected clients reconnect to the new Pods and normal users see no downtime or inconvenience.
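If you trigger the rolling update by changing the image, you can watch its progress and, if needed, revert it with the built-in rollout commands (a sketch using the Deployment name from the example above; nginx:1.9.1 is just an example tag):

# trigger a rolling update by switching the container image
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1

# follow the rollout until all new Pods are ready
kubectl rollout status deployment/nginx-deployment

# roll back to the previous ReplicaSet if something goes wrong
kubectl rollout undo deployment/nginx-deployment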

Advanced Deployment Strategies

Blue/Green Deployment:

Blue/green deployment, as described by TechTarget:

A blue/green deployment is a change management strategy for releasing software code. Blue/green deployments, which may also be referred to as A/B deployments require two identical hardware environments that are configured exactly the same way. While one environment is active and serving end users, the other environment remains idle.




Check this awesome article "Zero-downtime Deployment in Kubernetes with Jenkins"

Code Sample using Nginx example:

e.g. version 1, currently serving live users:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-v1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: v1.0.0
  template:
    metadata:
      labels:
        app: nginx
        version: v1.0.0
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80


e.g. version 2, which contains the new changes (note the different name, version label, and image):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-v2
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: v2.0.0
  template:
    metadata:
      labels:
        app: nginx
        version: v2.0.0
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1   # the new image with your changes
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80

You then have a front-facing production Service (say, service 1) that currently routes traffic to v1.0.0 (blue), and a second Service (say, service 2) pointing to v2.0.0 (green). Test your changes through service 2 against v2.0.0 (green), and if everything looks good, simply change service 1's selector to point to the v2.0.0 (green) Deployment. That is all you have to do. Once you see that the traffic hitting v2.0.0 is also behaving fine, you can shut down the v1.0.0 Deployment (and its Pods).

So here you will be running two Deployments side by side (keep in mind that this costs extra until you stop the old-version Deployment).

Here is how service 1 might look (i.e. the one currently serving normal users):

apiVersion: v1
kind: Service
metadata:
  name: app-a
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: v1.0.0
  ports:
  - name: http
    port: 80
    targetPort: 80

and service 2 might look like:

apiVersion: v1
kind: Service
metadata:
  name: app-b
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: v2.0.0
  ports:
  - name: http
    port: 80
    targetPort: 80

After testing, if everything looks good, change service 1's selector version to v2.0.0 as below:

apiVersion: v1
kind: Service
metadata:
  name: app-a
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: v2.0.0
  ports:
  - name: http
    port: 80
    targetPort: 80

After applying the updated Service, if everything continues to look good, you can remove the v1.0.0 Deployment.
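If you prefer not to re-apply the whole Service manifest, here is a sketch of the same switch and cleanup using kubectl directly (names taken from the examples above):

# point the app-a Service at the green (v2.0.0) Pods
kubectl patch service app-a -p '{"spec":{"selector":{"app":"nginx","version":"v2.0.0"}}}'

# once traffic on v2.0.0 looks healthy, remove the blue Deployment
kubectl delete deployment nginx-deployment-v1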

Use case: If you want to avoid unexpected situations while deploying new changes to production, this might be the right choice: you create a new version of the Deployment, test it, and only after you are satisfied with the testing do you move the traffic from the v1 Deployment to the v2 Deployment, and so on. This gives you confidence before making changes live. But remember that it increases cost somewhat, depending on how many additional replicas you run for the new-version Deployment and its testing before going live.

Canary Deployment:

A canary deployment consists of gradually shifting production traffic from version A to version B. Usually the traffic is split based on weight. For example, 90 percent of the requests go to version A, 10 percent goes to version B.

You can apply the canary technique natively by adjusting the number of replicas, or, if you use Nginx as an Ingress controller, you can define fine-grained traffic splitting via Ingress annotations.
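For example, with the Nginx Ingress controller a second Ingress marked as a canary can receive a weighted share of the traffic. This is a minimal sketch, assuming a primary Ingress for the same host already routes to the stable Service; the host and Service names are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # send roughly 10% of requests here
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-canary        # Service in front of the new (canary) version
            port:
              number: 80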

Read this article to know more: https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments

Another popular way to achieve canary deployments is Istio; here is a good article with full code examples:

https://istio.io/blog/2017/0.1-canary/

Additionally, check this article, which explains canary and the other deployment methods well with diagrams:

https://dzone.com/articles/kubernetes-deployment-strategies


Use case:

This technique is mostly used when the tests are lacking or not reliable or if there is little confidence in the stability of the new release on the platform.

This way you release changes to a subset of users, so if any issue appears you can roll back the deployment quickly and easily, and the features get tested by your actual users.

I have seen this kind of deployment used in many live, production-level applications and have done the same whenever it was required.

Remember to measure the risk factor before making the changes available to live users: exactly what percentage of the load you forward to the new deployment, whether 5%, 10%, or something else, depends on the nature of the application and the users it serves.

When you are fully confident, move all traffic to the new deployment and delete the old one.
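With the native (replica-count) approach, that last step is just scaling up the new Deployment and deleting the old one, e.g. (the Deployment names here are placeholders):

# promote: bring the new version up to full capacity
kubectl scale deployment my-app-v2 --replicas=10

# retire the old version once the new one serves all traffic
kubectl delete deployment my-app-v1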

Also check the documentation below for the Istio approach; it is very well explained with code examples.

https://github.com/ContainerSolutions/k8s-deployment-strategies/tree/master/canary/istio

For native:

https://github.com/ContainerSolutions/k8s-deployment-strategies/tree/master/canary/native

For Nginx:

https://github.com/ContainerSolutions/k8s-deployment-strategies/tree/master/canary/nginx-ingress

You might think this is similar to blue/green deployment, but think carefully: in blue/green you do not move the traffic gradually, you switch it all at once, whereas in canary deployment you shift it very slowly while observing the impact on users and error rates, and if any serious issue appears you can roll back quickly and efficiently so that no more users are affected.


A/B testing or Dark Deployment:

A/B testing deployments consist of routing a subset of users to a new functionality under specific conditions. It is usually a technique for making business decisions based on statistics rather than a deployment strategy. However, it is related and can be implemented by adding extra functionality to a canary deployment.

This technique is widely used to test the conversion of a given feature and only roll out the version that converts the most.

Here is a list of conditions that can be used to distribute traffic amongst the versions:

  • Weight
  • Cookie value
  • Query parameters
  • Geolocalisation
  • Technology support: browser version, screen size, operating system, etc.
  • Language

This is also called a dark deployment because the new feature is not released to all users but only to a particular set of users based on conditions like the ones above. Those users don't know they are acting as testers for the new feature, hence they are kept in the "dark". :)
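As an illustration, with Istio a VirtualService can route only the requests that carry a particular cookie (or header) to the new version, while everyone else stays on the stable one. This is a minimal sketch: the host name, subsets, and cookie value are placeholders, and the v1/v2 subsets are assumed to be defined in a DestinationRule.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app.example.com
  http:
  - match:
    - headers:
        cookie:
          regex: ".*beta-tester=true.*"   # only users opted into the beta
    route:
    - destination:
        host: my-app
        subset: v2                        # new version
  - route:
    - destination:
        host: my-app
        subset: v1                        # everyone else keeps the stable version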

Check this article, where it is explained with a diagram (I like the diagrams in this article very much):

https://dzone.com/articles/kubernetes-deployment-strategies

For code examples check this repo/documentation:

https://github.com/ContainerSolutions/k8s-deployment-strategies/tree/master/ab-testing

Use case: As the name and definition suggest, before a change or feature is deployed to all users, it is released to a particular subset of users based on one of the available conditions. You pick the condition according to the app or feature requirements, and only that subset of users gets the new feature or changes. It may look like canary deployment, but in canary you mainly split traffic by weight, whereas here you route based on request properties (often front-end concerns), so a particular feature is tested by real users without affecting everyone, and if a bug is reported it is easy to trace. You can consider the A/B testing strategy an add-on to canary deployment.

Shadow Deployment:

A shadow deployment consists of releasing version B alongside version A, fork version A’s incoming requests and send them to version B as well without impacting production traffic. This is particularly useful to test the production load on a new feature. A rollout of the application is triggered when stability and performance meet the requirements.

In this example, we make use of Istio to mirror traffic to the secondary deployment.
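A minimal sketch of such a mirroring rule with Istio (the host names are placeholders; the mirrored copy's responses are discarded, so version B never answers real users):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app.example.com
  http:
  - route:
    - destination:
        host: my-app-v1        # production traffic still goes to version A
    mirror:
      host: my-app-v2          # version B receives a copy of every request
    mirrorPercentage:
      value: 100.0             # mirror all traffic; lower this to sample a subset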

All code examples are here:

https://github.com/ContainerSolutions/k8s-deployment-strategies/tree/master/shadow

Use case:

This technique is fairly complex to set up and needs special requirements, especially with egress traffic. For example, given a shopping cart platform, if you want to shadow test the payment service you can end-up having customers paying twice for their order. In this case, you can solve it by creating a mocking service that replicates the response from the provider.


Summary:

Now that you have read all of the above, you know what kinds of deployment methods are available for your Kubernetes cluster, and, believe me, not every strategy works for everyone, as the use case scenarios show.

The application, business logic, development flow, features, budgetary constraints, and probably much more need to be considered before finalizing a deployment strategy.

If you know of better documentation, resources, or courses about Kubernetes, Docker, or the different deployment strategies, feel free to mention them in the comments.


Thanks for reading. If you liked it, share it with others.
