DEVOPS AUTOMATION


The task is structured as shown below.

Here we assume that you have Jenkins and Docker installed and a Git repository set up.

We have made three jobs: JOB1, JOB2, and JOB3.

Each job has its own role, so let's go step by step and deploy the project.

Here, let's assume the main developer works on the master branch and developer1 works on a dev1 branch created from master.
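
This branch layout can be reproduced with plain Git. The sketch below is illustrative (the repo path is a throwaway temp directory and the identity settings are placeholders); the branch name dev1 follows the article's screenshots:

```shell
# create a throwaway repo with a master branch and a dev1 branch cut from it
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -qb master
git config user.email "dev@example.com"   # placeholder identity for the demo
git config user.name  "demo"
echo "main page" > index.html
git add index.html
git commit -qm "initial commit on master"
# developer1 works on a branch created from master
git checkout -qb dev1
echo "change from developer1" >> index.html
git commit -qam "developer1's change"
git branch    # lists dev1 (current) and master
```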

JOB1

JOB1 fetches the code from developer1's branch and deploys it to a new testing container through a defined route, so that the quality-checking team can verify the code and decide whether it is good to go or not.

STEPS TO MAKE JOB1:-

STEP1:- Click on New Item.


STEP2:- Enter the item name (this is the job name), select Freestyle project, and click OK.


STEP3:- Open the configure option by clicking on it and make the following changes.

IN REPOSITORY URL: Enter the URL of the repository from which you want the code to be fetched.
IN BRANCH SPECIFIER: Enter developer1's branch (here, dev1).

Now we have to set up a build trigger; here we are using "Poll SCM".
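
Poll SCM takes a cron-style schedule. A common choice while testing (an example, not taken from the article's screenshots) is to poll every minute:

```
# Jenkins Poll SCM schedule: MINUTE HOUR DOM MONTH DOW
* * * * *
```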


Now add an "Execute shell" build step and write the commands in it.

The commands are given below and can be copied and pasted. A GitHub link is also provided to locate the file.

#!/bin/bash
# copy the fetched workspace files into the testing directory
sudo cp -v -r -f * /testing
# launch the testing container only if it is not already running
if sudo docker ps | grep testenv
then
echo "already running"
else
# port 8081 is used here so the test container does not clash with the main container on 8082
sudo docker run --name testenv -dit -p 8081:80 -v /testing:/usr/local/apache2/htdocs/ httpd
fi
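
The launch-only-if-absent pattern in this build step can be sketched in pure shell, with a marker file standing in for the real container so the logic can be tried anywhere (all names here are illustrative):

```shell
# idempotent "start" step, mirroring the docker ps | grep check in the build step
run_once() {
    marker="$1"
    if [ -e "$marker" ]       # stands in for: sudo docker ps | grep <name>
    then
        echo "already running"
    else
        touch "$marker"       # stands in for: sudo docker run ...
        echo "started"
    fi
}

m=$(mktemp -u)    # a path that does not exist yet
run_once "$m"     # prints: started
run_once "$m"     # prints: already running
```

Running the job twice is therefore safe: the second run just reports that the container is already up instead of failing on a name clash.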



After this, click the Save button.

We are done with JOB1; it's time to check the output.

This is the Git repo from which the data is fetched by Jenkins.


This is the output from the testing environment; this text is fetched from the dev1 branch.


So our first job (JOB1) is complete. It launches a new test container that the quality-checking team can access; if the container is already running, it reports that instead of launching a duplicate.

Moving on to making JOB2.

Steps to make JOB2:-

Step1:- Click on New Item.


Step2:- Enter the item name (the job name), select Freestyle project, and click OK.


STEP3:- Open the configure option by clicking on it and make the following changes.

IN REPOSITORY URL: Enter the URL of the repository from which you want the code to be fetched.
IN BRANCH SPECIFIER: Enter the main developer's branch (master).


Now a build trigger is to be set up; here "Poll SCM" is used.


Now add an "Execute shell" build step and write the commands in it.

The commands are given below and can be copied and pasted. A GitHub link is also provided to locate the file.

#!/bin/bash
# copy the fetched workspace files into the main environment directory
sudo cp -v -r -f * /mainenv/
# launch the main container only if it is not already running
if sudo docker ps | grep mainos
then
echo "already running"
else
sudo docker run --name mainos -dit -p 8082:80 -v /mainenv:/usr/local/apache2/htdocs/ httpd
fi
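
The deploy part of this build step is just a recursive copy into the directory that the httpd container serves. Its effect can be sketched with temporary directories standing in for the Jenkins workspace and /mainenv (the paths and file content here are illustrative):

```shell
# simulate the workspace -> web-root copy that the build step performs
ws=$(mktemp -d)        # stands in for the Jenkins workspace
webroot=$(mktemp -d)   # stands in for /mainenv, the bind-mounted web root
echo "hello from master" > "$ws/index.html"
cp -v -r -f "$ws"/* "$webroot"/
cat "$webroot/index.html"    # prints: hello from master
```

Because /mainenv is bind-mounted into the container at /usr/local/apache2/htdocs/, anything copied there is served immediately without restarting the container.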

After this, click the Save button.

We are done with JOB2; it's time to check the output.

This is the file data in the Git repo's master branch.


This is the main environment output which is available to the client.


Here our second job (JOB2) is complete. It launches a new "main container" that the client can easily access; if the container is already running, it reports that instead of launching a duplicate.

So far we have automated two jobs; the last job is a little tricky, so let's proceed to it.

Steps to make JOB3:-

Step1:-Click on New Item.


Step2:- Enter the item name (the job name), select Freestyle project, and click OK.


STEP3:- Open the configure option by clicking on it and make the following changes.

IN REPOSITORY URL: Enter the URL of the repository from which you want the code to be fetched.
IN BRANCH SPECIFIER: Enter the main developer's branch (master).

Remember to add your GitHub credentials here.


Open the "Additional Behaviours" panel by clicking on it, select "Merge before build", and fill it in as shown below.
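
Under the hood, "Merge before build" does roughly the following in the workspace before the build runs. The sketch below uses a throwaway repo with placeholder identity settings; the branch names master and dev1 follow the article's setup:

```shell
# throwaway repo: master plus a dev1 branch carrying one extra commit
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -qb master
git config user.email "dev@example.com"   # placeholder identity
git config user.name  "demo"
echo "base" > page.txt
git add page.txt
git commit -qm "base on master"
git checkout -qb dev1
echo "feature" >> page.txt
git commit -qam "work from developer1"
# what "Merge before build" effectively does in the workspace:
git checkout -q master
git merge -q dev1      # brings developer1's work into master before the build
grep feature page.txt  # prints: feature
```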


In JOB3 we create a trigger token, so that the job can be run only by the quality management team. If the work passes all quality norms, the team triggers the token to merge the work into the main environment. If the work fails the quality norms, the team sends a message to developer1 to rectify the defect.

To trigger the job with the token, the line has this form:

curl --user "jenkins-username:jenkins-password" <jenkins-url>/job/merging/build?token=redhat

example:

curl --user "admin:root" http://192.168.0.106:8080/job/merging/build?token=redhat

Only the quality team runs this trigger URL to start the job.


Now add an "Execute shell" build step and write the commands in it.

The commands are given below and can be copied and pasted. A GitHub link is also provided to locate the file.

In JOB3, if the testing container is running it has to be deleted; if it is not there, a message is printed instead.

#!/bin/bash
# remove the testing container if it is running; otherwise just report
if sudo docker ps | grep testenv
then
sudo docker rm -f testenv
echo "container removed"
else
echo "no testing container running"
fi
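
This remove-if-present logic is the mirror image of JOB1's launch step; in pure shell, with a marker file standing in for the container (names are illustrative), it looks like this:

```shell
# idempotent "tear down" step, mirroring the build step above
stop_if_running() {
    marker="$1"
    if [ -e "$marker" ]       # stands in for: sudo docker ps | grep testenv
    then
        rm -f "$marker"       # stands in for: sudo docker rm -f testenv
        echo "container removed"
    else
        echo "no testing container running"
    fi
}

m=$(mktemp)           # an existing file = "container is running"
stop_if_running "$m"  # prints: container removed
stop_if_running "$m"  # prints: no testing container running
```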

After this, select the "post-build" option and make the following changes in it:

project to build = JOB2's name

We need the merged result uploaded to the main environment, so make sure the rest of the settings stay as they are.


After this, click the Save button.

We are done with JOB3; it's time to check the output.

The token is run by the quality team.



Here is the final output.


Below we can see that our testing environment has been destroyed.


So JOB3 merges the two Git branches, updates the main environment, and destroys the testing environment as soon as the given URL is run by the quality team.

SUMMARY

In this project we have automated the system so that when developer1 makes an upgrade to the main developer's code (the master branch), that work is first deployed to the testing environment and checked by the quality management team. If the work passes all quality norms, the team triggers a token that runs JOB3, which merges the two branches, deploys the updated master branch to the main environment, and destroys the testing environment.

Here are the GitHub links for all the commands.

Thanks guys for reading this article. Hope you guys have enjoyed it.









