JENKINS ON KUBERNETES

We have run Jenkins on Docker, Windows, Linux, and other platforms. Sometimes there is a use case where we need to run Jenkins on a Kubernetes cluster. That works fine, but there is a catch: Jenkins stores a lot of user and job information on disk, and if the pod or deployment gets deleted, that data is deleted with it.

To overcome this, this article shows how to deploy Jenkins with a persistent volume (PV) and persistent volume claim (PVC), so that if the pod or deployment is ever deleted, our data is not lost and we can relaunch the deployment and continue where we left off.
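Since the repository's yml files are not reproduced here, the sketch below only illustrates the three objects involved; every name, path, and size in it is an assumption, and the real files may differ. The key idea is that JENKINS_HOME (/var/jenkins_home) is mounted from the claim, so it survives pod deletion.

```yaml
# Sketch of the three objects applied later in the article.
# Names, sizes, and the hostPath are illustrative, not the repository's values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/jenkins            # node-local path inside the minikube VM
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home   # jobs, users, plugins all live here
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-pvc
```

Because the data lives on the claim rather than in the container filesystem, a replacement pod that mounts the same PVC picks up the existing Jenkins home.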

All the yml files can be found in my GitHub repository, linked at the end of this article.

I am using Windows as the operating system and minikube to launch the Kubernetes cluster.

Go to my GitHub repository, linked at the end of this article, and clone or download the yml files along with mail.py. Copy the files into a directory of your choice and change into that directory so it becomes our working directory.

Start minikube by running minikube.exe start in the command prompt.

Follow the steps:

Run kubectl apply -f . to create all the resources defined in the yml files.

We can see that three resources have been created. It will take some time to pull the image and launch the deployment.

We can see that the pod, PV, and PVC have been created.
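This check can be done with kubectl's get subcommand; the exact resource names will be whatever your yml files define:

```shell
# List the pods, persistent volumes, and persistent volume claims in one call.
kubectl get pods,pv,pvc

# Watch the pod until the image has been pulled and it reaches Running.
kubectl get pods -w
```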

Now we need to expose port 8080 so that we can access the Jenkins instance running in the deployment.
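A NodePort service is one way to do this. The deployment name jenkins below is an assumption and should match the name in your yml file:

```shell
# Put a NodePort service in front of the deployment; Kubernetes assigns a
# node port (30000-32767) that maps to the container's 8080.
kubectl expose deployment jenkins --type=NodePort --port=8080

# Show the service to read off the assigned node port.
kubectl get service jenkins
```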

Now get the minikube IP by running minikube ip in the command prompt, take the service's port number, and open that address and port in the browser. We will see the Jenkins unlock screen asking for the initial administrator password.

Go back to the command prompt and get the logs of the pod by running kubectl logs with the pod name to see the initial admin password.
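Either of these shows the password; Jenkins prints it in the startup log and also stores it in a file under JENKINS_HOME (substitute your actual pod name):

```shell
# The initial admin password is printed between two rows of asterisks in the log.
kubectl logs <jenkins-pod-name>

# It can also be read straight from JENKINS_HOME inside the container.
kubectl exec <jenkins-pod-name> -- cat /var/jenkins_home/secrets/initialAdminPassword
```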

Now just follow the normal steps to set up Jenkins.

I have created job1 and a user credential.

Now I have deleted my service and deployment, and then relaunched them.

It can be seen that everything has been deleted.

Here I ran all the commands again and followed the same procedure to expose the deployment.

Here we can see that the job created earlier has not been lost, because its data was stored on the persistent volume.

Up to this point, we have successfully set up Jenkins on Kubernetes with persistent storage so that the data won't be lost.

Now we will use a master-slave setup and repeat the same jobs as above.

The only difference is that we will create a master and slave system.

I have created one more job that tests the pods and sends a notification if they are not working; every other job remains the same as in the steps above.
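The actual mail.py is in the repository; the sketch below is a hypothetical stand-in showing one way such a check could work: parse the output of kubectl get pods, collect the pods that are not Running, and mail the list. The SMTP host and email addresses are placeholders.

```python
import smtplib
import subprocess
from email.message import EmailMessage


def failing_pods(kubectl_output: str) -> list[str]:
    """Return the names of pods whose STATUS column is not 'Running'.

    Expects the plain-text output of `kubectl get pods`: a header line
    followed by one line per pod (NAME READY STATUS RESTARTS AGE).
    """
    failing = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip the header
        fields = line.split()
        if len(fields) >= 3 and fields[2] != "Running":
            failing.append(fields[0])
    return failing


def notify(pods: list[str]) -> None:
    """Send a plain-text alert mail listing the broken pods.

    The SMTP host and addresses are placeholders; the real mail.py in the
    repository may do this differently.
    """
    msg = EmailMessage()
    msg["Subject"] = "Jenkins pods not running"
    msg["From"] = "alerts@example.com"
    msg["To"] = "admin@example.com"
    msg.set_content("These pods are not in Running state:\n" + "\n".join(pods))
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)


def check_and_notify() -> None:
    """Run `kubectl get pods` and mail an alert if any pod is unhealthy."""
    output = subprocess.run(
        ["kubectl", "get", "pods"], capture_output=True, text=True, check=True
    ).stdout
    broken = failing_pods(output)
    if broken:
        notify(broken)
```

A Jenkins freestyle job can then simply call check_and_notify() on a schedule from the node that has kubectl access.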

Remember to select "Restrict where this project can run" in all the jobs.

GITHUB LINK

CONCLUSION

This shows that Jenkins can be run on Kubernetes in such a way that, even if any of the pods get deleted, the job data won't be lost, since we used a persistent volume claim.

