AUTOMATED WEB APP DEPLOYMENT USING KUBERNETES

ARCHITECTURE

[Architecture diagram]

This article shows you how to automate web app deployment with Kubernetes. Done by hand, it is not an easy task: fetch the developer's code, build an operating-system image according to his/her needs, upload it to Docker Hub, pull the image from there, and deploy it on Kubernetes, which launches the pod. Isn't that a very long and tiring process to repeat again and again?

And things always need frequent updates once they are built. In this article I am going to explain how to automate this long process.

Here is the flow: once the developer uploads his/her Dockerfile and web pages to GitHub, Jenkins automatically pulls the files from GitHub and builds the Docker image, then automatically uploads it to Docker Hub. After this, Jenkins runs a second job that pulls the image from Docker Hub and creates a deployment on the Kubernetes cluster with autoscaling enabled: if traffic increases, pods are added, and if traffic drops, pods are terminated automatically. The job also exposes the deployment as a service so that we can access our website. This whole tedious process is automated using a job chain in Jenkins.

The code of the Dockerfile can be found on my GitHub account, which is linked at the end of this article.
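As a reference point, a minimal Dockerfile for serving static web pages with Apache httpd could be created as sketched below. This is an assumption for illustration only, not the exact file from my repository:

    # A minimal sketch (assumed for illustration, not the exact file
    # from the repository): write a Dockerfile that installs Apache httpd
    # and copies the web pages into the document root.
    cat > Dockerfile <<'EOF'
    FROM centos:7
    RUN yum install -y httpd
    COPY . /var/www/html/
    EXPOSE 80
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
    EOF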

JOB1 - It will pull the files from GitHub, build an image from the Dockerfile, and push it to Docker Hub.

If we don't use Jenkins to automate it, this job has to be done manually.
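By hand, the steps look roughly like this (the repository, account, and image names below are placeholders):

    # Clone the developer's code, build the image, and push it to Docker Hub.
    git clone https://github.com/<your-account>/<your-repo>.git
    cd <your-repo>
    docker build -t <dockerhub-user>/webapp:latest .
    docker login        # prompts for Docker Hub credentials
    docker push <dockerhub-user>/webapp:latest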

[Screenshot: the manual build and push]

If we use Jenkins, the job is automated. It can be configured as shown below.
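One way to set this up, as a hedged sketch: point Job 1 at the GitHub repository under Source Code Management, trigger it with Poll SCM or a GitHub webhook, and add an Execute shell build step along these lines (the Docker Hub account and image names are placeholders):

    # Jenkins has already checked the repository out into the workspace,
    # so we can build and push straight from there.
    docker build -t <dockerhub-user>/webapp:${BUILD_NUMBER} .
    docker push <dockerhub-user>/webapp:${BUILD_NUMBER}
    # Tag and push "latest" as well so Job 2 has a stable tag to pull.
    docker tag <dockerhub-user>/webapp:${BUILD_NUMBER} <dockerhub-user>/webapp:latest
    docker push <dockerhub-user>/webapp:latest

Job 2 can then be chained by adding it under Job 1's post-build action "Build other projects", which is what makes the whole flow run hands-free.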

[Screenshots: Jenkins Job 1 configuration and console output]

So Job 1 is now complete: our image has been built and uploaded by Jenkins automatically.

JOB2

Let's get into Job 2.

If the deployment is being created for the first time, Job 2 creates it successfully, sets up autoscaling according to CPU load, and exposes the deployment.

But if a new image has been uploaded and Job 2 is run again, it rolls the update out on the same deployment and serves the new content.
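A sketch of what Job 2's Execute shell step could run; the deployment name, image name, and autoscaling limits here are assumptions, not the exact job from the screenshots:

    # TAG is whatever tag Job 1 pushed. A per-build tag is assumed here,
    # because re-applying an unchanged ":latest" tag would not trigger a rollout.
    TAG=<tag-pushed-by-job1>
    if kubectl get deployment webapp > /dev/null 2>&1
    then
        # The deployment already exists: roll the new image onto it.
        kubectl set image deployment/webapp webapp=<dockerhub-user>/webapp:$TAG
        kubectl rollout status deployment/webapp
    else
        # First run: create the deployment, enable CPU-based autoscaling,
        # and expose it on a node port.
        kubectl create deployment webapp --image=<dockerhub-user>/webapp:$TAG
        kubectl autoscale deployment webapp --min=1 --max=5 --cpu-percent=80
        kubectl expose deployment webapp --type=NodePort --port=80
    fi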

[Screenshots: Jenkins Job 2 configuration]

When the pod is not ready, it looks like the image shown below.

[Screenshot: pod not ready]

When the pod is ready, it looks like the image shown below.

[Screenshot: pod running and ready]

This is the view when the job runs for the first time, as we can see in the images.

[Screenshots: the first run of Job 2]
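To check this state from the command line, the usual kubectl queries are as follows (the name "webapp" for the deployment and service is the assumption carried over from the sketch above):

    # Watch the pods come up and confirm the deployment,
    # autoscaler, and service are all in place.
    kubectl get pods
    kubectl get deployment webapp
    kubectl get hpa
    kubectl get svc webapp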

The IP we get is a local IP, meaning only machines connected to our network can access the site. But we want to expose it to the rest of the world so that everyone can access it.

First of all, download ngrok for your operating system, unzip it, and add its location to the system path.

[Screenshot: ngrok command]

Replace the IP with your IP and the port with your respective port number.
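For example, with a hypothetical Minikube node IP and NodePort (substitute your own values):

    # Tunnel the Kubernetes NodePort service to a public ngrok URL.
    # 192.168.99.100 and 31000 are sample values, not from the screenshots.
    ./ngrok http 192.168.99.100:31000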

In my case it looked like this.

[Screenshots: the ngrok tunnel and the website reachable on the public URL]

Conclusion

Today we created a pipeline that automates the long, hard process of deploying a web application with Kubernetes. Using a Jenkins job chain, we created the deployment with autoscaling enabled in Kubernetes, so that pods scale up automatically when traffic increases and scale down when traffic is low.

GITHUB LINK FOR DOCKERFILE

I want to thank Mr ADITYA GUPTA for helping me complete this task.

Thank you for reading this article.
