Automated System using Groovy | Jenkins | Kubernetes

Another thing that is still done manually is creating jobs in Jenkins, and for this the developer team depends on the operations team. But why depend on them when the developers are already strong at coding? Can the developers themselves write code that configures the jobs automatically? Here I present a way to automate this manual work using Jenkins' domain-specific language (DSL) for job configuration, which is written in Groovy.

Another important reason why writing code to configure the jobs is a better idea than doing it via the web UI is that the web UI leaves almost no room for automation or customization.

  • Let’s say the developer has written some code and pushed it to GitHub. We create a seed job, also known as the admin job, which pulls the Groovy code that configures all the further jobs from GitHub (a minimal sketch of such a seed job follows below).
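To make the idea concrete, here is a sketch of what the seed job could itself look like if expressed in Job DSL. In practice the seed job is usually created once by hand in the UI with a "Process Job DSLs" build step, so treat this only as an illustration; the script file name jobs.groovy is an assumption, while the repository mirrors the one used for the jobs below.

job('seed-job') {
    description('Admin/seed job: pulls the Job DSL scripts from GitHub and generates the other jobs')
    scm {
        git('https://github.com/khushi20218/devopsrepo.git', 'master')
    }
    triggers {
        githubPush()
    }
    steps {
        // "Process Job DSLs" build step from the Job DSL plugin.
        // 'jobs.groovy' is a hypothetical file name for the script that
        // defines groovyjob1-3 and the pipeline view.
        dsl {
            external('jobs.groovy')
            removeAction('IGNORE')
        }
    }
}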

When you run the admin job for the first time, it fails because the Job DSL script has not been approved yet.

SOLUTION: Manage Jenkins -- In-process Script Approval -- Approve.


But this is again a manual step, so another way of handling it is: Manage Jenkins => Configure Global Security => CSRF Protection => uncheck. Keep in mind that loosening these security settings trades safety for convenience, so only do it in a trusted setup.
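If you would rather not touch the global security settings, another hedged option is to pre-approve the pending scripts from the Jenkins script console. This is only a sketch and assumes the script-security plugin's ScriptApproval API, including its preapproveAll() helper, is available in your Jenkins version:

import org.jenkinsci.plugins.scriptsecurity.scriptapproval.ScriptApproval

// Approve everything currently waiting in the in-process script approval queue.
// Assumes preapproveAll() exists in your script-security plugin version;
// run this from Manage Jenkins -> Script Console.
ScriptApproval.get().preapproveAll()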


Now when we run our admin job, it creates 3 jobs and 1 pipeline view!


The problem statement for these three jobs is:

  • Job 1 -- pulls all the webpages from GitHub into our base repository.
  • Job 2 -- by looking at the code, Jenkins launches the respective container, deploys the code and starts an interpreter.
  • Job 3 -- tests the program/code; if it works fine, the webpage is deployed and run on the production environment, and if not, a mail is sent to the developer.

In this architecture, monitoring is not required because, instead of plain Docker, we use Kubernetes, which manages the pods and redeploys them whenever needed.

job("groovyjob1"){
		description("Job to pull the developer code from github")
		scm {
			github('khushi20218/devopsrepo.git','master')
		}
		triggers {
	        githubPush()
	    }
	

		steps{
		shell('sudo cp * /task3')
		}
}

This code generates job 1, which pulls the webpages from GitHub whenever the developer pushes any code.


For complete automation, so that GitHub itself tells Jenkins when it has to pull the files, I used GitHub webhooks. Configuration of the webhook:


In the Payload URL we have to put <Jenkins-URL>/github-webhook/. If Jenkins is not publicly reachable, we can also use ngrok, a tool that exposes a port on our private IP through a public URL:

# ./ngrok http 8080

Everything is now configured as per our requirement!

Now we move on to job 2. Job 2 will use a dynamic slave node, i.e. one provisioned for working with Kubernetes, and will run on that node itself. Before configuring the slave node we need to make some changes on the Docker side, because the tool used behind the scenes for setting up the slave nodes is Docker.


We need to edit the Docker daemon configuration so that clients from any IP can connect to Docker on the specified port (for example, by adding an -H tcp://0.0.0.0:<port> option to the dockerd startup line). The Dockerfile for setting up the kubectl client:

FROM ubuntu:16.04

RUN apt-get update && apt-get install -y openssh-server
RUN apt-get install openjdk-8-jre -y
RUN mkdir /var/run/sshd
RUN echo 'root:redhat' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile


# kubectl setup
RUN apt-get install curl -y
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Now you can configure the slave node in the following way.

Manage Jenkins -- Manage Nodes and Clouds -- Add a new cloud


Along with these steps you also need to specify the volume, i.e. the location where all the files needed to run kubectl are kept: ca.crt, client.key, client.crt and the kubectl config file.

Now, for the deployment and the service configuration, we created two config files: deployment.yaml, which defines a Deployment named website for the webserver pods, and service.yaml, which exposes it outside the cluster (job 3 later checks it at node port 32000).

Now we are ready to deploy our website using pods in Kubernetes. When job 2 starts, it triggers the creation of a slave node with the required label, and then job 2 does its work of deploying the pods as required.

job("groovyjob2")


	{
	label('kubectl')


	triggers {
        upstream('groovyjob1', 'SUCCESS')
}
steps{
	shell('''cd /root/.kube
python3 a.py
count=$(kubectl get deployment | grep website | wc -l)
if [ $count -eq 1 ]
then
echo "deployment already running !"
else
kubectl create -f deployment.yaml
kubectl create -f service.yaml
fi''')
	}
}

When we run the Groovy code, our job 2 gets configured:


Job 2 has to judge the type of the files and start the respective interpreter. For this purpose we wrote a small Python script (presumably the a.py invoked in job 2's shell step) which prints the extensions of all the files in the respective folder.

import os, glob

# Switch to the directory that job 2 works in
os.chdir("/root/.kube")

files = []
for f in glob.glob("*"):
    ext = os.path.splitext(f)[1]
    files.append(ext)

print("All extensions are", files)

Now job 2 will create the Deployment and the Service, and if the Deployment already exists it can instead replace the image used inside the Deployment with a kubectl command, as sketched below.
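The job 2 script shown above only prints a message when the deployment already exists; here is a hedged sketch of how that branch of the shell step could perform the image replacement instead. The Deployment name website comes from the scripts above, while the container name and the image tag khushi20218/web:latest are placeholders, not taken from the article.

steps {
    shell('''if kubectl get deployment website > /dev/null 2>&1
then
    # Deployment exists: swap in the freshly built image and wait for the rollout.
    # The container name "website" and the image tag are assumptions.
    kubectl set image deployment/website website=khushi20218/web:latest
    kubectl rollout status deployment/website
else
    kubectl create -f deployment.yaml
    kubectl create -f service.yaml
fi''')
}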

Job 3 checks whether the website is properly deployed or not, and if not, sends an email to the developer.

job("groovyjob3")
	{
	label('kubectl')


	triggers {
        upstream('groovyjob2', 'SUCCESS')
}
steps{
	shell('''if kubectl get deployments website
then
echo "pod running "
curl 192.168.99.104:32000
else 
exit 1
fi''')
	}


publishers {
        extendedEmail {
            recipientList('[email protected]')
            defaultSubject('Unstable build')
            defaultContent('The build is unstable')
            contentType('text/html')
            triggers {
                failure {
		    attachBuildLog(true)
                    subject('Unstable build')
                    content('The build is unstable')
                    sendTo {
                        developers()
                    }
                }
            }
        }
    }
}


We do not need a separate job for redeploying the pod, because Kubernetes takes care of that behind the scenes and automatically redeploys the pod if it goes down.

Also, we create a build pipeline view for the above jobs, starting from groovyjob1 (the downstream jobs appear automatically thanks to the upstream triggers):

buildPipelineView('groovy pipeline') {
    title('task6groovy')
    displayedBuilds(5)
    selectedJob('groovyjob1')
    showPipelineParameters(true)
    refreshFrequency(3)
}


That's all! Do leave your valuable feedback. For any queries or corrections, feel free to contact me.
