Continuous Integration & Deployment (CI/CD) | Using a Jenkins Pipeline to Deploy an App in a Kubernetes Cluster.

Our tastes, experiences and expectations are constantly changing, a shift amplified by social media and technology in general. The demand for immediate results and outcomes shows up everywhere, and technology is no exception: it has to be agile and responsive to changing business dynamics.

DevOps provides an answer to this changing dynamic, because its philosophy is built around continuous integration and continuous deployment. Done well, this yields applications that are self-healing, immutable and cost-effective. The goal is to solve technology problems durably, with creativity counting for as much as raw technical know-how.

The goal of this article is to demonstrate how to use a Jenkins Pipeline to continuously deploy a webpage across disparate systems.

REQUIREMENTS

1) A JENKINS SERVER

2) A DOCKER HUB REPOSITORY

3) A GITHUB REPOSITORY

4) A KUBERNETES CLUSTER

5) DOCKER

STEP 1: We create a Jenkins Job

STEP 2: We define our Pipeline script (we'll break this down later in the article)

STEP 3: At this point our Kubernetes cluster is blank, with only the default resources.

STEP 4: We run our Job (Fingers Crossed!)

Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/JENKINS_PIPELINE_LINKEDIN
[Pipeline] {
[Pipeline] stage
[Pipeline] { (GITHUB)
[Pipeline] sh
+ git clone https://github.com/nugowe/testing.git
Cloning into 'testing'...
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (DOCKER_IMAGE_BUILD)
[Pipeline] sh
+ cd /var/lib/jenkins/workspace/JENKINS_PIPELINE_LINKEDIN/testing
+ docker build -t nosaugowe/arsenaljenkins:latest .
Sending build context to Docker daemon  160.8kB

Step 1/2 : FROM nginx:latest
latest: Pulling from library/nginx
852e50cd189d: Pulling fs layer
571d7e852307: Pulling fs layer
addb10abd9cb: Pulling fs layer
d20aa7ccdb77: Pulling fs layer
8b03f1e11359: Pulling fs layer
d20aa7ccdb77: Waiting
8b03f1e11359: Waiting
addb10abd9cb: Verifying Checksum
addb10abd9cb: Download complete
d20aa7ccdb77: Verifying Checksum
d20aa7ccdb77: Download complete
8b03f1e11359: Verifying Checksum
8b03f1e11359: Download complete
571d7e852307: Verifying Checksum
571d7e852307: Download complete
852e50cd189d: Verifying Checksum
852e50cd189d: Download complete
852e50cd189d: Pull complete
571d7e852307: Pull complete
addb10abd9cb: Pull complete
d20aa7ccdb77: Pull complete
8b03f1e11359: Pull complete
Digest: sha256:6b1daa9462046581ac15be20277a7c75476283f969cb3a61c8725ec38d3b01c3
Status: Downloaded newer image for nginx:latest
 ---> bc9a0695f571
Step 2/2 : ADD index.html /usr/share/nginx/html
 ---> fc71bfe446c2
Successfully built fc71bfe446c2
Successfully tagged nosaugowe/arsenaljenkins:latest
+ docker images
REPOSITORY                 TAG                 IMAGE ID            CREATED                  SIZE
nosaugowe/arsenaljenkins   latest              fc71bfe446c2        Less than a second ago   133MB
nginx                      latest              bc9a0695f571        2 weeks ago              133MB
+ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (DOCKER HUB PUSH)
[Pipeline] withCredentials
Masking supported pattern matches of $nugowe or $password
[Pipeline] {
[Pipeline] sh
+ docker push nosaugowe/arsenaljenkins:latest
The push refers to repository [docker.io/nosaugowe/arsenaljenkins]
d8c32f253f9d: Preparing
7e914612e366: Preparing
f790aed835ee: Preparing
850c2400ea4d: Preparing
7ccabd267c9f: Preparing
f5600c6330da: Preparing
f5600c6330da: Waiting
7ccabd267c9f: Layer already exists
7e914612e366: Layer already exists
f790aed835ee: Layer already exists
850c2400ea4d: Layer already exists
f5600c6330da: Layer already exists
d8c32f253f9d: Pushed
latest: digest: sha256:3e07352d6bc0f79b6032a718e44a46a07190a72d77f4ea5d10e3d489ddcdcf2f size: 1569
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (ARTIFACT DEPLOYMENT)
[Pipeline] sh
+ microk8s.kubectl create namespace arsenaljenkinspractice
namespace/arsenaljenkinspractice created
[Pipeline] sh
+ microk8s.kubectl create deployment arsenaljenkinsupdate --image=nosaugowe/arsenaljenkins:latest --replicas=3 --namespace=arsenaljenkinspractice
deployment.apps/arsenaljenkinsupdate created
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

Success! As seen above, all four stages of our pipeline ran successfully. What does this mean, you might ask? We were able not only to pull resources from disparate systems but also to build immutable infrastructure. GitHub provides versioning, while Jenkins exercises the code at each stage, so applications and services are less likely to break, and they can be rebuilt on the fly if need be. It is agile and responsive.
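
For reference, the two-step image build shown in the log corresponds to a Dockerfile along these lines (reconstructed from the build output; the actual file lives in the GitHub repository):

```dockerfile
# Step 1/2: start from the official nginx base image
FROM nginx:latest
# Step 2/2: copy our webpage into nginx's default document root
ADD index.html /usr/share/nginx/html
```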

STEP 5: Checking our Kubernetes Cluster..

Our Deployment was successful. All pods are healthy and running as planned.

In order to view our web artifact, we need the IP addresses of our pods. Kindly note that this is for demonstration purposes only: it is bad practice to expose pods directly outside the cluster. Ideally the deployment should be exposed as a Service, with a load balancer routing the traffic.
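
As a sketch (assuming the namespace and deployment names used above), the pod IPs can be listed, and the deployment exposed behind a Service, like so:

```shell
# List pods together with their cluster IPs
microk8s.kubectl get pods -o wide --namespace=arsenaljenkinspractice

# Safer alternative: expose the deployment as a LoadBalancer Service
microk8s.kubectl expose deployment arsenaljenkinsupdate \
    --port=80 --type=LoadBalancer --namespace=arsenaljenkinspractice
```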

We were able to view the webpage at each of the three pod IPs. In a production setup, the load balancer would route traffic across all three pods, for example in a round-robin sequence. (For fans of the English Premier League: I am a shameless Arsenal fan.)

This pipeline can be automated via webhooks from the repository provider (GitHub in my case); since I am running Jenkins from my laptop, however, we can define a cron trigger instead.
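
In a scripted pipeline, the cron trigger can be declared in the Groovy itself rather than through the "Build periodically" UI field; a minimal sketch (the schedule below, roughly every 30 minutes, is just an example):

```groovy
// Register a periodic trigger for this job; Jenkins's 'H' token
// spreads the start time to avoid load spikes on the controller
properties([
    pipelineTriggers([cron('H/30 * * * *')])
])
```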

OUR PIPELINE CODE:

node {

    stage('GITHUB') {
        sh 'git clone https://github.com/nugowe/testing.git'
    }

    stage('DOCKER_IMAGE_BUILD') {
        sh script: '''
        #!/bin/bash
        cd /var/lib/jenkins/workspace/JENKINS_PIPELINE_LINKEDIN/testing
        docker build -t nosaugowe/arsenaljenkins:latest .
        docker images
        docker ps -a
        '''
    }

    stage('DOCKER HUB PUSH') {
        withCredentials([
            usernamePassword(credentialsId: 'DockerHubCred', usernameVariable: 'nugowe', passwordVariable: 'password')
        ]) {
            // Log in to Docker Hub with the stored credentials before pushing
            sh 'echo "$password" | docker login -u "$nugowe" --password-stdin'
            sh 'docker push nosaugowe/arsenaljenkins:latest'
        }
    }

    stage('ARTIFACT DEPLOYMENT') {
        sh 'microk8s.kubectl create namespace arsenaljenkinspractice'
        sh 'microk8s.kubectl create deployment arsenaljenkinsupdate --image=nosaugowe/arsenaljenkins:latest --replicas=3 --namespace=arsenaljenkinspractice'
        sh 'microk8s.kubectl get all --namespace=arsenaljenkinspractice'
    }
}

I prefer Jenkins's scripted (Groovy) pipelines because they let you freely mix different languages and tools when building your pipelines.
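
For comparison, the same flow could be written as a declarative pipeline; a minimal sketch (abbreviated to the first two stages, using the same job names as above):

```groovy
pipeline {
    agent any
    stages {
        stage('GITHUB') {
            steps {
                sh 'git clone https://github.com/nugowe/testing.git'
            }
        }
        stage('DOCKER_IMAGE_BUILD') {
            steps {
                // Build from the freshly cloned repository
                sh 'cd testing && docker build -t nosaugowe/arsenaljenkins:latest .'
            }
        }
        // ... remaining stages follow the same pattern
    }
}
```

Declarative syntax is stricter and more readable, but the scripted form gives you the full flexibility of Groovy.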

SUMMARY:

We have demonstrated how to build immutable, stable infrastructure continuously, on demand. This reinforces the point made earlier about matching technology to the changing business dynamics of any organization.

NEXT:

I will attempt to use a Jenkins Pipeline to deploy a Docker image to AWS's ECR.
