CI/CD pipeline for deployment to Kubernetes cluster

Learning new tools is challenging, and it becomes even more challenging when the tools have to work together in a single workflow and therefore require the right setup. One such challenge is setting up a CI/CD pipeline for a web application. This article is a walkthrough of setting up a CI/CD pipeline for a simple Node.js application on a Kubernetes cluster. Some hands-on experience and basic knowledge of Docker, Kubernetes and Jenkins will make it easier to follow along.

CI/CD and its benefits

CI/CD (Continuous Integration / Continuous Deployment) pipelines speed up application development, code testing and deployment by automating the whole process. Companies such as Amazon and Netflix use such workflows to ship bug fixes, patches and new features quickly and frequently.

Toolkit

GitHub - For source control.

Jenkins - To orchestrate the whole pipeline.

Docker & Docker Hub - To build the application image and store it.

Kubernetes - For deploying and managing the web servers.

AWS - Infrastructure to deploy the application on a public web server, along with a load balancer.

Walkthrough

GitHub

Save all the application-related files in a public GitHub repository. All the other necessary files, such as the Dockerfile to build the Docker image, pods.yml to deploy the pod, and services.yml to create a load balancer, should also be present here.
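
A minimal layout for such a repository could look like the following sketch (the application source files will vary; changeTag.sh is a helper script introduced later in this article):

.
├── Dockerfile
├── package.json
├── ...              (application source files)
├── pods.yml
├── services.yml
└── changeTag.sh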

Contents of the Dockerfile which I used -

FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
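
Before wiring this into a pipeline, the Dockerfile can be sanity-checked locally. A rough sketch, assuming Docker is installed and the application listens on port 8080 as the Dockerfile suggests (the image name here is only an example):

# Build the image from the Dockerfile in the current directory
docker build -t nodeappcicd:local .

# Run the container and map its port 8080 to the host
docker run --rm -p 8080:8080 nodeappcicd:local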

Contents of the pods.yml file which I used -

apiVersion: v1
kind: Pod
metadata:
  name: nodeapp
  labels:
    app: nodeapp
spec:
  containers:
    - name: nodeapp
      image: shishirkhandelwal/nodeappcicd:tagVersion
      ports:
        - containerPort: 8080
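
To try the manifest by hand on the cluster before automating anything, something like the following works (a sketch; the tagVersion placeholder in the image field would need to be replaced with a real tag first):

# Create or update the pod from the manifest
kubectl apply -f pods.yml

# Check that the pod is running, and inspect it if it is not
kubectl get pods
kubectl describe pod nodeapp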

Contents of the services.yml file which I used -

kind: Service
apiVersion: v1
metadata:
  name: nodeapp
spec:
  selector:
    app: nodeapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
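
Once this service is applied on an AWS-backed cluster, Kubernetes provisions a load balancer and publishes its address in the service's EXTERNAL-IP column. A quick check, as a sketch:

# Create the LoadBalancer service
kubectl apply -f services.yml

# The EXTERNAL-IP column shows the load balancer endpoint once it is ready
kubectl get service nodeapp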

Jenkins

The approach is to use a Jenkins Pipeline to orchestrate the workflow. For this, two sets of credentials are added in the Jenkins global credentials section: the Docker Hub credentials and the SSH credentials for the Kubernetes cluster's master node. The pipeline first builds the application image and pushes it to Docker Hub (which is why Jenkins needs the Docker Hub credentials). It then deploys the new image from Docker Hub to the Kubernetes cluster. Deployments to the cluster are carried out through its master node, so the pipeline SSHes into the master node and applies the manifests from there using kubectl apply (which is why Jenkins needs the master node's SSH credentials).
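
For context, the deploy step the pipeline automates is roughly the following, done by hand (ignoring the image-tag handling discussed next; the master node address is the one used later in this article):

# Copy the manifests to the cluster's master node and apply them there
scp services.yml pods.yml [email protected]:/home/ec2-user/
ssh [email protected] kubectl apply -f .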

The tricky part

What makes the task a bit difficult is finding a way for Kubernetes to always deploy the newest image from Docker Hub. (This is the part where I was stuck the longest while trying to create the pipeline.) Try to think of a solution!

Solution

After a lot of trial and error with tutorials on various blogs, I finally understood what could be done! The way to approach the problem is to dynamically create a new tag for the Docker image every time it is pushed to Docker Hub, and to update the container image section of the pods.yml file with that new tag. This way, when we run the kubectl apply command, the latest image is pulled and deployed. This is what makes the pipeline truly automated.

A script (changeTag.sh) to replace the placeholder tag in pods.yml with the new image tag -

#!/bin/bash

# Replace the tagVersion placeholder with the tag passed as the first argument
sed "s/tagVersion/$1/g" pods.yml > node-app-pod.yml
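
For example, running the script with the current commit ID produces node-app-pod.yml, which is identical to pods.yml except that tagVersion is replaced by that commit ID:

# Generate a manifest pinned to the image tag for the current commit
./changeTag.sh "$(git rev-parse HEAD)"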

Creating a new tag dynamically

Whenever we push a new commit to a GitHub repository, a unique commit ID (the commit hash) gets associated with it. This commit ID can be retrieved inside Jenkins and used as the tag for our Docker image!
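
Outside Jenkins, the same idea looks roughly like this (a sketch using the image name from earlier):

# Use the current commit's hash as the image tag
COMMIT_ID=$(git rev-parse HEAD)
docker build -t shishirkhandelwal/nodeappcicd:${COMMIT_ID} .
docker push shishirkhandelwal/nodeappcicd:${COMMIT_ID}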

The Jenkins pipeline code

 
pipeline {
    agent any
    environment {
        // Commit hash of the current build, used as the Docker image tag
        DOCKER_TAG = getDockerTag()
    }
    stages {
        stage('Build Docker Image') {
            steps {
                sh "docker build . -t shishirkhandelwal/nodeappcicd:${DOCKER_TAG}"
            }
        }
        stage('Dockerhub push') {
            steps {
                withCredentials([string(credentialsId: 'docker-hub', variable: 'DockerHubPwd')]) {
                    sh "docker login -u shishirkhandelwal -p ${DockerHubPwd}"
                    sh "docker push shishirkhandelwal/nodeappcicd:${DOCKER_TAG}"
                }
            }
        }
        stage('Deploy to kubernetes') {
            steps {
                // Generate node-app-pod.yml with the new image tag
                sh "chmod +x changeTag.sh"
                sh "./changeTag.sh ${DOCKER_TAG}"
                sshagent(['kubernetes-master']) {
                    sh "scp -o StrictHostKeyChecking=no services.yml node-app-pod.yml [email protected]:/home/ec2-user/"
                    script {
                        try {
                            sh "ssh [email protected] kubectl apply -f ."
                        } catch (error) {
                            sh "ssh [email protected] kubectl create -f ."
                        }
                    }
                }
            }
        }
    }
}

// Return the current commit hash, trimmed of the trailing newline so it is a valid image tag
def getDockerTag() {
    def tag = sh script: 'git rev-parse HEAD', returnStdout: true
    return tag.trim()
}


The getDockerTag function retrieves and returns the commit ID. The tag is stored as an environment variable so that it can be accessed from any stage of the Jenkins pipeline.

That concludes the walkthrough for creating a CI/CD pipeline.

