CI/CD Pipeline With Jenkins Dynamic Distributed Cluster and Kubernetes
Hello everyone,
I am back again with a new and very interesting article. First, let me tell you a little about Jenkins clusters.
What is a Jenkins Cluster?
Basically, a Jenkins cluster is a master-slave system: the node (OS) on which the Jenkins server runs is known as the master. By default, whatever jobs we build, Jenkins builds/runs them on the master node. But in the real world we have a large number of Jenkins jobs, so it is impossible to run all of them on the master, as the master node has limited resources like RAM, CPU, storage, etc. So we need additional nodes among which we can divide and run the jobs, and these nodes are known as slave nodes. The Jenkins master controls these slave nodes and decides which slave has to run which job. This complete master-slave system is known as a cluster.
In Jenkins there are two types of clusters.
Static cluster :- In this type of cluster, we can use any virtual machine or OS as a slave node. Here we have to launch and set up the node manually before a job starts building. These slave nodes keep running continuously whether a job is building or not, which is a heavy waste of resources in terms of RAM and CPU.
Dynamic cluster :- Here we use Docker containers as the slave nodes, and we only have to connect the Jenkins master with the Docker server/engine once. When a job starts building, Jenkins launches a Docker container and runs the job inside it, and when the job finishes building, the container is automatically terminated.
So, in the real world we mainly use a Jenkins dynamic cluster, as it has many advantages over the static one. In this article, I will show you how to achieve this.
Today's Task:
Create a dynamic Jenkins cluster!
Steps to proceed:
1. Create a container image that has Linux and the other basic configuration required to run a slave for Jenkins. (For example, here we require kubectl to be configured.)
2. Create a job chain of Job1 and Job2 using the Build Pipeline plugin in Jenkins.
3. Job1: Pull the GitHub repo automatically when a developer commits to the repo on GitHub, and perform the following operations:
1. Create a new image dynamically for the application and copy the application code into that Docker image.
2. Push that image to Docker Hub (public repository).
(The GitHub repo contains the application code and the Dockerfile to create the new image.)
4. Job2 (should run on the dynamic slave of Jenkins, configured with the Kubernetes kubectl command): Launch the application on top of the Kubernetes cluster, performing the following operations:
1. If launching for the first time, create a deployment of the pod using the image created in the previous job. Else, if the deployment already exists, do a rollout of the existing pods with zero downtime for the user.
2. If the application is created for the first time, expose it. Else, don't expose it.
Description:
1. First we have to create a container image that has Linux and the other basic configuration required to run a slave for Jenkins (for example, here we require kubectl to be configured).
Here I am creating my own Docker image using the Dockerfile concept; you can build this file on any OS. What I put in this Dockerfile is described below.
In this Docker image I have installed an SSH server, because Jenkins will need the shell of the slave node to run the job (this will become clearer when we configure the dynamic cloud), so in the slave node we need to start the SSH service so that Jenkins can use it. I also installed the kubectl client and provided all the keys and the configuration file so that the kubectl client can connect to the Kubernetes server and use its services. Kubernetes (the server) is running on a different OS.
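A minimal sketch of such a Dockerfile, assuming a CentOS 7 base, a root password of "redhat", kubectl v1.19, and that the certificates and kubeconfig sit next to the Dockerfile (all of these details are assumptions, not taken from the original file), could look like this:

FROM centos:7
# SSH server so Jenkins can reach the slave's shell, plus Java for the Jenkins agent process
RUN yum install -y openssh-server openssh-clients java-1.8.0-openjdk && \
    ssh-keygen -A && \
    echo "root:redhat" | chpasswd
# kubectl client so the slave can talk to the Kubernetes cluster (version is an assumption)
RUN curl -Lo /usr/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubectl && \
    chmod +x /usr/bin/kubectl
# Keys, CA certificate and kubeconfig so kubectl can authenticate to the cluster
COPY ca.crt client.key client.crt /root/
COPY config /root/.kube/config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

The SSH daemon runs in the foreground so the container stays alive, and the kubeconfig is placed at kubectl's default location so no extra flags are needed later.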
Configuration file used in the image:
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://192.168.99.107:8443
    certificate-authority: /root/ca.crt
  name: lw
contexts:
- context:
    cluster: lw
    user: omprakash
users:
- name: omprakash
  user:
    client-key: /root/client.key
    client-certificate: /root/client.crt
With the commands below, we build this Dockerfile to create the Docker image and push it to Docker Hub.
docker build -t omprakash1234/kub:v1 .
docker push omprakash1234/kub:v1
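To verify that the slave image can actually reach the Kubernetes cluster, a quick sanity check (this exact command is an illustration and assumes the kubeconfig was baked into the image at kubectl's default path) is:

docker run --rm omprakash1234/kub:v1 kubectl get nodes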
Job1:
This job will pull the GitHub repo automatically when a developer commits to it on GitHub and perform the following operations:
We have to create a new image dynamically for the application and copy the application code into that Docker image.
The scenario here is that the developer will keep updating the application, and we have one Dockerfile, which is:
FROM centos:latest
RUN yum install httpd -y
COPY *.html /var/www/html
CMD /usr/sbin/httpd -DFOREGROUND
EXPOSE 80
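To sanity-check this Dockerfile locally before wiring it into Jenkins, one could build and run it by hand (the image name, host port, and the file name web.html are illustrative assumptions):

# Build the application image and serve the page locally
docker build -t webtest .
docker run -d --name webtest -p 8080:80 webtest
curl http://localhost:8080/web.html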
As we are copying the Dockerfile as well as web.html into /root/, once the job starts building, an image containing the application file will be created.
Now we have to push that image to Docker Hub (a public repository). For that we use the docker push command; your Jenkins OS should already be logged in to Docker Hub.
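A minimal sketch of what the Job1 "Execute shell" build step could look like, assuming the job copies the repo contents to /root/ and tags the image omprakash1234/webserver:latest (the paths and the tag are assumptions):

# Copy the cloned repo (Dockerfile + web.html) from the Jenkins workspace to the build directory
cp -rvf $WORKSPACE/* /root/
cd /root/

# Build the application image and push it to Docker Hub (this node is already logged in)
docker build -t omprakash1234/webserver:latest .
docker push omprakash1234/webserver:latest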
You might be wondering where we will use this image: Kubernetes will use it to launch our application.
As we have to run Job2 on a dynamic slave of Jenkins, let's configure that first.
By default, the Docker server is accessible only from within the VM, so we have to make it accessible from outside the VM; in our case, Jenkins will try to connect to this Docker engine. For this we have to edit the file /usr/lib/systemd/system/docker.service and add the setting shown below, using any port number. We do this on the node where the Docker slaves will be created dynamically.
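A sketch of the change, assuming port 4243 (the port number is arbitrary, as noted above): extend the ExecStart line of docker.service so dockerd also listens on TCP, then restart the daemon.

# /usr/lib/systemd/system/docker.service (only the ExecStart line is changed)
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock

# Reload systemd and restart Docker so the TCP listener takes effect
systemctl daemon-reload
systemctl restart docker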
Coming back to Jenkins, we now have to set up the dynamic node; before that, we have to install the Docker plugin.
After that, go to Jenkins > Manage Jenkins > Manage Nodes and Clouds > Configure Clouds and select Docker. Then fill in the details: the Docker Host URI of the Docker engine, and a Docker Agent template that uses our slave image (omprakash1234/kub:v1) with a label and SSH as the connect method.
In Docker Host URI we have to write the IP address of the VM where the Docker engine is running, i.e., where you want the dynamic Docker slaves to be created.
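The value is a tcp:// URI that points at the port opened in docker.service above, for example:

tcp://<ip-of-docker-vm>:4243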
Now our Jenkins dynamic cluster is ready to use. If we run a job in Jenkins, it will first contact the Docker engine, Docker will create a container as the slave node, and then Jenkins will run the job on that slave node.
And since this slave node has kubectl, it can launch the application on top of Kubernetes, which is what we do in Job2.
Job 2:
Here I configured this job to build on a slave node, i.e., the Docker slave, by restricting where the project can run to the label of the Docker agent template.
This job should run just after Job1, so I configured it to build after Job1 finishes (using the "Build after other projects are built" trigger).
This job launches the application on top of the Kubernetes cluster, performing the following operations:
If launching for the first time, it creates a deployment of the pod using the image created in the previous job. Else, if the deployment already exists, it does a rollout of the existing pods, giving zero downtime for the user.
I used rollout restart here because we keep pushing to the same image tag even though the application inside it has been updated, so the pods have to be restarted to pull the new image.
Here my deployment is created and all the pods are exposed through the service. We can see the deployment file that was used; a sketch of the kubectl commands for this job follows it.
apiVersion: v1
kind: Service
metadata:
  name: cluster-service
  labels:
    app: webapp
spec:
  selector:
    app: webapp
  type: NodePort
  ports:
  - nodePort: 31300
    port: 81
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
  labels:
    app: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      name: myweb
      labels:
        app: webapp
    spec:
      containers:
      - name: myweb-img
        image: omprakash1234/webserver
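A minimal sketch of the Job2 "Execute shell" step, assuming the file above is saved as deploy.yml in the repo and the deployment is named myweb (the file name and the exact existence check are assumptions about how the create-or-rollout logic can be scripted):

# Runs on the Docker slave, which has kubectl and the kubeconfig baked in
if kubectl get deployment myweb > /dev/null 2>&1
then
    # Deployment already exists: restart its pods so they pull the freshly pushed image (rolling, zero downtime)
    kubectl rollout restart deployment/myweb
else
    # First launch: create the deployment and the NodePort service defined above
    kubectl apply -f deploy.yml
fi
# Wait until the new pods are ready
kubectl rollout status deployment/myweb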
Here is the Build Pipeline output
Now we can see our site before and after the update...
Finally, the rollout was completed successfully.
THANKS FOR READING !!!