OpenShift 4.X CI/CD - OpenShift Pipelines

OpenShift Pipelines is a cloud-native CI/CD solution for building pipelines, based on the open-source Tekton project. In a future revision of this article, I'll add a link to a detailed Tekton article that explains its working principle under the hood of OpenShift Pipelines. This article covers the following three areas:

  • Challenges around traditional CI/CD servers
  • How OpenShift Pipelines uses Tekton under the hood to address these challenges
  • A hands-on walkthrough to create and run your own OpenShift Pipelines

Traditional CI/CD Server Challenges


Looking at the CI/CD tooling landscape, there are lots of options to choose from (and more tools are added as we speak). It's great to have choices, but too many choices can lead to confusion and fragmentation. Just like developers, our customers have challenges making their own tooling decisions. Let's zoom in on one very popular CI/CD tool and address a few challenges around the monolithic CI/CD server.


To be totally clear on this point, Jenkins is really powerful in what it does, and there's a reason why it's chosen as one of the most popular build servers out there. With that out of the way, let's address the challenges around the traditional CI/CD server, with Jenkins as a reference.

  • Plug-ins everywhere: Plug-ins let users add extra features without dedicating resources to features they don't wish to use. In Jenkins, however, plug-ins are not reserved for optional functionality beyond the core features: from Docker to GitHub integration, you need plug-ins for common use-cases that you would expect to be baked into the server. To be sure, many of Jenkins's 1,500 plugins provide functionality that not everyone needs, but the fact that you need plugins in Jenkins to do just about anything is problematic. The bigger issue at play here is that most of Jenkins' plugins are written by third parties, vary in quality, and may lose support without notice.
  • Jenkins is pre-container: Although CI servers are often part of the modern DevOps conversation, many of them (looking at you, Jenkins!) are relatively old technology, designed long before anyone envisioned containers and microservices as the infrastructure of choice for software deployment. As a result, traditional CI servers don't do much to help teams take full advantage of next-generation infrastructure like Docker containers. They integrate with Docker rather awkwardly, via multiple plugins; Jenkins has no fewer than 14 different plugins with Docker in their names. In an increasingly container-native world, this is not a good way for a CI server to operate.
  • CI != CD: Probably the biggest problem with Jenkins, and CI servers in general, is that software delivery teams sometimes conflate continuous integration (CI) with continuous delivery (CD). In fact, CI and CD are different things. CD also requires release automation into whatever environment you happen to be working with, as well as communication tools and channels that enable the software delivery team to collaborate seamlessly. When organizations set up a CI server and immediately consider their software delivery modernization work done, they are making a big mistake.

OpenShift Pipelines

OpenShift Pipelines is available as an operator in the OpenShift 4.x OperatorHub.


It can build container images with tools such as S2I and Buildah, and it can deploy applications to multiple platforms such as Kubernetes, serverless, and VMs. The core pieces of the pipeline, which are Tekton components, are easy to extend and portable across any Kubernetes platform. OpenShift Pipelines is designed for microservices and decentralized teams and is integrated with the OpenShift Developer Console. Finally, as a Kubernetes-native CI/CD solution, OpenShift Pipelines offers all the flexibility and power that Kubernetes has to offer, such as self-healing and auto-scaling.

Hands-on with OpenShift Pipelines

Step one: Installing the Pipelines Operator

OpenShift Pipelines is an OpenShift add-on that can be installed via an operator available in the OpenShift OperatorHub. Operators are often installed into a single namespace and only monitor resources in that namespace, but the OpenShift Pipelines Operator installs globally on the cluster and monitors and manages pipelines for every user in the cluster. You can install the operator using the Operators tab in the web console, or you can use the CLI tool oc. In this exercise, I use the latter.

# Login from the admin perspective
oc login -u admin -p admin

# Check openshift-pipelines-operator package from OperatorHub
oc describe packagemanifest openshift-pipelines-operator -n openshift-marketplace
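If you just want the handful of fields used in the Subscription below, a jsonpath query can pull them out directly. This is a convenience sketch; the field paths assume the standard PackageManifest status layout:

# Print the default channel, then each channel with its current CSV
oc get packagemanifest openshift-pipelines-operator -n openshift-marketplace \
  -o jsonpath='{.status.defaultChannel}{"\n"}{range .status.channels[*]}{.name}{" -> "}{.currentCSV}{"\n"}{end}'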

From that package manifest, you can find all the information that you need to create a Subscription to the Pipeline Operator.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: dev-preview
  installPlanApproval: Automatic
  name: openshift-pipelines-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: openshift-pipelines-operator.v0.8.2

The channel, name, starting CSV, source, and source namespace are all taken from the package manifest you just described. You can find more information on how to add operators in the OpenShift documentation. For now, all you need to do is apply the associated YAML file.

oc apply -f ./operator/subscription.yaml

Verify Installation

The OpenShift Pipelines Operator provides all its resources under a single API group: tekton.dev. This operation can take a few seconds; you can run the following script to monitor the progress of the installation.

until oc api-resources --api-group=tekton.dev | grep tekton.dev &> /dev/null
do
  echo "Operator installation in progress..."
  sleep 5
done

echo "Operator ready"

Once you see the message Operator ready, the operator is installed, and you can see the new resources by running:

oc api-resources --api-group=tekton.dev

Verify user roles

To validate that your user has the appropriate roles, you can use the oc auth can-i command to see whether you can create Kubernetes custom resources of the kind needed by the OpenShift Pipelines Operator. The custom resource you need to create an OpenShift Pipelines pipeline is a resource of the kind pipeline.tekton.dev in the tekton.dev API group. To check that you can create this, run:

oc auth can-i create pipeline.tekton.dev

Or you can use the simplified version:

oc auth can-i create Pipeline

If the response is yes, you have the appropriate access.

Verify that you can create the rest of the Tekton custom resources needed for this article by running the commands below. All of the commands should respond with yes.

oc auth can-i create Task

oc auth can-i create PipelineResource

oc auth can-i create PipelineRun
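If you prefer to check all four kinds in one go, a small shell loop does the same thing (a convenience sketch, not part of the original workshop flow):

# Check create permission for every Tekton kind used in this article
for kind in Pipeline Task PipelineResource PipelineRun; do
  echo "create ${kind}: $(oc auth can-i create ${kind})"
done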

Now that you have verified that you can create the required resources, let's configure the pipeline service account.

Step two: Pipeline Service Account

In this article, the pipeline you create uses tools such as S2I and Buildah to build a container image for an application. Building container images with tools such as S2I, Buildah, and Kaniko requires privileged access to the cluster, and OpenShift's default security settings do not allow privileged containers unless correctly configured.

The operator has created a service account with the required permissions to run privileged pods for building images. The name of this service account is easy to remember: it is named pipeline. You can verify that the pipeline service account has been created by running the following command:

oc get serviceaccount pipeline

In addition to the privileged security context constraint (SCC), the pipeline service account also has the edit role. This set of permissions allows pipeline to push a container image to OpenShift's internal image registry. pipeline can only push to the section of the internal registry that corresponds to your OpenShift project namespace; this namespacing helps to separate projects on an OpenShift cluster.
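If you want to see these permissions for yourself, you can list the role bindings in your current project and look for the service account (a quick sanity check, assuming your user can read role bindings):

# Show role bindings in the current project that mention the pipeline service account
oc get rolebindings -o wide | grep pipeline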

The pipeline service account executes PipelineRuns on your behalf. You can see an explicit reference for a service account when you trigger a pipeline run later in this article. In the next section, you will set up the sample application on OpenShift that is deployed in this article.

Step three: Deploying a sample application

For this article, you're going to use a simple Node.js application that interacts with a MongoDB database. The application needs to be deployed in a new project (i.e. Kubernetes namespace). Start by creating the project with:

oc new-project lab-tekton

For this tutorial, you will deploy the nodejs-ex sample application from the sclorg repository.

To prepare for nodejs-ex's eventual deployment, you first create Kubernetes objects that are supplementary to the application, such as a route (i.e. URL). The deployment will not complete since no container image has been built for the nodejs-ex application yet; you will complete this deployment in the following sections through a CI/CD pipeline.

Create the supplementary Kubernetes objects by running the command below:

oc create -f sampleapp/sampleapp.yaml

nodejs-ex also needs a MongoDB database. You can deploy a container with MongoDB to your OpenShift project by running the following command:

oc new-app centos/mongodb-36-centos7 -e MONGODB_USER=admin MONGODB_DATABASE=mongodb MONGODB_PASSWORD=secret MONGODB_ADMIN_PASSWORD=super-secret

You should see --> Success in the output of the command, which verifies the successful deployment of the container image.

The command above uses a container image with a CentOS 7 operating system and MongoDB 3.6 installed. It also sets the environment variables MongoDB needs for its deployment, such as the username, database name, password, and admin password; the first is passed with the -e option, and oc new-app treats the remaining bare key=value arguments as environment variables too.
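To double-check those variables, you can ask oc to list the environment currently set on the deployment config (the name mongodb-36-centos7 is derived from the image name and matches the deployment config shown in the Topology view later):

# List environment variables on the MongoDB deployment config
oc set env dc/mongodb-36-centos7 --list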

A service is an abstract way to expose an application running on a set of pods as a network service. Using a service name allows nodejs-ex to reference a consistent endpoint in the event the pod hosting your MongoDB container is updated from events such as scaling pods up or down or redeploying your MongoDB container image with updates.

You can see all the services, including the one for nodejs-ex in your OpenShift project by running the following command:

oc get services

Now that you are familiar with Kubernetes services, go ahead and connect nodejs-ex to the MongoDB. To do this, set the connection string in an environment variable by running the following command:

oc set env dc/nodejs-ex MONGO_URL="mongodb://admin:secret@mongodb-36-centos7:27017/mongodb"

Verify the deployment

To verify the creation of the resources needed to support nodejs-ex and the MongoDB, you can head out to the OpenShift web console.

You need to log in with username admin and password admin.

Make sure the Developer perspective is selected in the dropdown at the top left corner of the web console.

Next, select the Project dropdown menu shown below and choose the project namespace you have been working with (lab-tekton).

Next, click on the Topology tab on the left side of the web console. Once in the Topology view, you can see the deployment configs for the nodejs-ex application and the MongoDB.


You'll notice the white circle around the nodejs-ex deployment config. This highlight means that the nodejs-ex application isn't running yet. More specifically, no container hosting the nodejs-ex application has been created, built, and deployed yet.

The mongodb-36-centos7 deployment config has a dark blue circle around it, meaning that a pod is running with a MongoDB container on it. The MongoDB should be all set to support the nodejs-ex application at this point.

In the next section, you'll learn how to use Tekton tasks.

Step four: Create Tasks

Tasks consist of a series of steps that are executed sequentially, each in a separate container within the same task pod. Tasks can have inputs and outputs to interact with other tasks as part of a pipeline.
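To make that structure concrete before looking at s2i-nodejs, here is a minimal, hypothetical Task; the name and image are illustrative only:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-task   # hypothetical example, not used in this article's pipeline
spec:
  steps:
    # Each step runs in its own container inside the same task pod, in order
    - name: hello
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ['echo', 'Hello']
    - name: world
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ['echo', 'World']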

For this exercise, you will create the s2i-nodejs task from the catalogue repositories using oc. This is the first of two tasks you add to your pipeline for this article.

The s2i-nodejs task has been broken into pieces below to help highlight its key aspects.

s2i-nodejs starts by defining a property called inputs, as shown below. Underneath inputs, a resources property specifies that a resource of type git is required, indicating that this task takes a git repository as an input.

spec:
  inputs:
    resources:
      - name: source
        type: git

The params property below defines fields that must be specified when using the task (e.g. the version of Node.js to use).

    params:
      - name: VERSION
        description: The version of the nodejs
        default: '12'
      - name: PATH_CONTEXT
        description: The location of the path to run s2i from.
        default: .
      - name: TLSVERIFY
        description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry)
        default: "true"

There is also an outputs property shown below that is used to specify that something is output as a result of running this task. The type of output is image. This property specifies that this task creates an image from the git repository provided as an input.

Many resource types are possible and not only limited to git and image. You can find out more about the possible resource types in the Tekton documentation.

  outputs:
    resources:
      - name: image
        type: image

A steps property defines the steps that run during the task, and each step is denoted by its name. s2i-nodejs has three steps:

generate

    - name: generate
      image: quay.io/openshift-pipeline/s2i
      workingdir: /workspace/source
      command: ['s2i', 'build', '$(inputs.params.PATH_CONTEXT)', 'centos/nodejs-$(inputs.params.VERSION)-centos7', '--as-dockerfile', '/gen-source/Dockerfile.gen']
      volumeMounts:
        - name: gen-source
          mountPath: /gen-source
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi

build

    - name: build
      image: quay.io/buildah/stable
      workingdir: /gen-source
      command: ['buildah', 'bud', '--tls-verify=$(inputs.params.TLSVERIFY)', '--layers', '-f', '/gen-source/Dockerfile.gen', '-t', '$(outputs.resources.image.url)', '.']
      volumeMounts:
        - name: varlibcontainers
          mountPath: /var/lib/containers
        - name: gen-source
          mountPath: /gen-source
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
      securityContext:
        privileged: true

push

    - name: push
      image: quay.io/buildah/stable
      command: ['buildah', 'push', '--tls-verify=$(inputs.params.TLSVERIFY)', '$(outputs.resources.image.url)', 'docker://$(outputs.resources.image.url)']
      volumeMounts:
        - name: varlibcontainers
          mountPath: /var/lib/containers
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
      securityContext:
        privileged: true

Each step above runs serially in its own container. Since the generate step uses an s2i command to generate a Dockerfile from the source code in the git repository input, the image used for its container has s2i installed.

The build and push steps both use a Buildah image to run commands to build the Dockerfile created by the generate step and then push that image to an image registry (i.e. the output of the task).

You can see the images used for both these steps via the image property of each step.

The order of the steps above (i.e. 1. generate, 2. build, 3. push) specifies when these steps should run. For s2i-nodejs, this means generate runs first, followed by build, and the push step executes last.

Under the resources property of each step, you can define the amount of resources needed for the container in terms of CPU and memory.

      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi

You can view the full definition of this task in the OpenShift Pipelines Catalog GitHub repository or by running:

cat ./tektontasks/s2i-nodejs-task.yaml

Create the s2i-nodejs task, which defines and builds a container image for the nodejs-ex application and pushes the resulting image to an image registry:

oc create -f tektontasks/s2i-nodejs-task.yaml
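Since a task is just a namespaced custom resource, you can also confirm it exists with plain oc:

# Confirm the task resource was created in the current project
oc get task s2i-nodejs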

In the next section, you will examine the second task definition needed for our pipeline.

Step five: Task Resource Definitions

The openshift-client task you will create is simpler, as shown below:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: openshift-client
spec:
  inputs:
    params:
      - name: ARGS
        description: The OpenShift CLI arguments to run
        default: help
  steps:
    - name: oc
      image: quay.io/openshiftlabs/openshift-cli-tekton-workshop:2.0
      command: ["/usr/local/bin/oc"]
      args:
        - "$(inputs.params.ARGS)"

openshift-client doesn't have any input or output resources associated with it, only an ARGS parameter. It also has only one step, named oc.

This step uses an image with oc installed and runs the oc root command along with any args passed to the step under the args property. This task allows you to run any command with oc. You will use it to deploy the image created by the s2i-nodejs task to OpenShift. You will see how this takes place in the next section.

Create the openshift-client task that will deploy the image created by s2i-nodejs as a container on OpenShift:

oc create -f tektontasks/openshift-client-task.yaml

Note: For convenience, the tasks have been copied from their original locations in the Tekton and OpenShift catalogue git repositories to this repository.

You can take a look at the list of tasks using the Tekton CLI (tkn):

tkn task ls

You should see similar output to this:

NAME               AGE
openshift-client   58 seconds ago
s2i-nodejs         3 minutes ago
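Because ARGS is passed straight through to oc, you can also exercise the openshift-client task on its own before wiring it into a pipeline. A sketch, assuming your tkn version supports -p for parameters and -s for the service account:

# Run the openshift-client task standalone to list pods (illustrative)
tkn task start openshift-client -p ARGS="get pods" -s pipeline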

In the next section, you will create a pipeline that uses s2i-nodejs and openshift-client tasks.

Step six: Create Pipeline

A pipeline defines a set of tasks to execute and how they interact with each other via their inputs and outputs.

In this tutorial, you will create a pipeline that takes the source code of a Node.js application from GitHub and then builds and deploys it on OpenShift using s2i and Buildah.

Below is a YAML file that represents the above pipeline:

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: deploy-pipeline
  labels:
    app: tekton-workshop
spec:
  resources:
  - name: app-git
    type: git
  - name: app-image
    type: image
  tasks:
  - name: build
    taskRef:
      name: s2i-nodejs
    params:
      - name: TLSVERIFY
        value: "false"
    resources:
      inputs:
      - name: source
        resource: app-git
      outputs:
      - name: image
        resource: app-image
  - name: deploy
    taskRef:
      name: openshift-client
    runAfter:
      - build
    params:
    - name: ARGS
      value: "rollout latest nodejs-ex"

This pipeline performs the following:

  • Clones the source code of the application from a git repository (i.e. the app-git resource)
  • Builds the container image using the s2i-nodejs task, which generates a Dockerfile for the application and uses Buildah to build the image
  • Pushes the application image to an image registry (i.e. the app-image resource)
  • Deploys the new application image on OpenShift using the openshift-client task

The pipeline definition above shows how tasks are added to a pipeline. Each pipeline has a tasks property. Under this property, each task has a name. For this pipeline, it has two tasks named build and deploy. The taskRef property under each task name is where the tasks you just created can be specified as part of the pipeline.

You might have noticed that there are no references to the nodejs-ex git repository and the image registry URL. That's because pipelines in Tekton are designed to be generic and reusable across environments and stages through the application's lifecycle.

Pipeline resources such as the git source repository and image registry are abstracted away from the pipeline. When triggering a pipeline, you can provide different git repositories and image registries to be used during the pipeline execution. You will do that in the next section.

For the deploy task of the pipeline, you can see under the params property that a value is passed: "rollout latest nodejs-ex". This argument is how oc rollout latest nodejs-ex is run by the oc step of the openshift-client task. The command triggers a new deployment of nodejs-ex using the most recently pushed version of the image (the latest tag).
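Concretely, given the task definition's command of /usr/local/bin/oc, the deploy task's single step expands to the equivalent of:

# What the openshift-client task's oc step effectively executes
oc rollout latest nodejs-ex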

The execution order of tasks is determined by the dependencies defined between the tasks, via inputs and outputs as well as explicit ordering defined via runAfter. You'll notice the deploy task above has a runAfter property specifying that it should only execute after the build task is complete.

The command below uses oc to take the pipeline definition from above from a local directory and then creates it in your OpenShift project. Run the command below to create the pipeline:

oc create -f pipeline/deploy-pipeline.yaml

You can see the pipeline you have created using tkn:

tkn pipeline ls

View from the Console

Now that you have created your pipeline, you can view it via the OpenShift web console. Make sure you are on the main Pipelines page of the web console.

Once on this page, you should see the pipeline you just created (i.e. deploy-pipeline) listed.


The column Last Run indicates the last pipeline run that has occurred for deploy-pipeline. The Last Run Status displays whether a pipeline run succeeded or failed. Task Status shows the status of each task that is running as part of the deploy pipeline. Finally, Last Run Time indicates how long ago the last pipeline run for deploy-pipeline was.

By clicking on the three dots to the right of the Last Run Time column, you can see how to trigger a pipeline run from the web console using the Start or Start Last Run options. The Start Last Run option is not available yet, as deploy-pipeline has never been executed. There is also a Delete option to remove pipelines from your project namespace.

Click on the name deploy-pipeline under the Name column. This takes you to an overview page that shows more information about deploy-pipeline, including tasks on the pipeline.

This page also features tabs that show the YAML definition of the pipeline resource created; all pipeline runs for deploy-pipeline; and the ability to define parameters as well as resources for deploy-pipeline.

If you click on the Resources tab, you will see that deploy-pipeline requires two pipeline resources: app-git and app-image.


You will need to create these resources so that deploy-pipeline has the proper git repo input and knows where to push the resulting image for nodejs-ex.

In the next section, you will focus on creating the app-git and app-image pipeline resources.

Step seven: Create Pipeline Resources

Before you can trigger the pipeline you just created, pipeline resources must be defined as inputs and outputs for the build task of the pipeline.

The build task of your pipeline takes a git repository as an input and then produces an image that is pushed to an image registry. Pipeline resources are how you can specify the specific URLs of the git repository and the image registry.

Much like tasks, these pipeline resources are reusable. The git repository pipeline resource could be used as an input to a different task on a different pipeline, and the image registry output could be used for a different image as the result of a task run.

The following pipeline resource defines the git repository and reference for the nodejs-ex application:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: nodejs-ex-git
  labels:
    app: tekton-workshop
spec:
  type: git
  params:
  - name: url
    value: https://github.com/sclorg/nodejs-ex

You can see above that the resource has a name (i.e. nodejs-ex-git), and, under the spec property, we define that this pipeline resource has a type of git, meaning it is a git repository.

The last property of nodejs-ex-git is params and is used to specify the URL associated with the git input.
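url is not the only param the git resource type understands; for example, a revision param can pin the resource to a branch, tag, or commit. A sketch, not used in this article's pipeline:

spec:
  type: git
  params:
  - name: url
    value: https://github.com/sclorg/nodejs-ex
  - name: revision   # optional: branch, tag, or commit SHA to check out
    value: master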

The following defines the OpenShift internal registry for the resulting nodejs-ex image to be pushed to:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: nodejs-ex-image
  labels:
    app: tekton-workshop
spec:
  type: image
  params:
  - name: url
    value: image-registry.openshift-image-registry.svc:5000/lab-tekton/nodejs-ex:latest

The format follows the same structure as the git pipeline resource. The main difference is that a type of image is specified under the spec property, meaning this is an image registry that will have an image pushed to it. The URL for the registry is specified under the params property, just like with the git pipeline resource. In this case, we are using OpenShift's internal registry.

Create the above pipeline resources via the oc commands below.

Add the git repository input for the pipeline:

oc create -f resources/git-pipeline-resource.yaml

Add the registry for the image to be pushed to as an output of the build task of the pipeline:

oc create -f resources/image-pipeline-resource.yaml

You can see the pipeline resources created using tkn:

tkn resource ls

You can also get more information about the pipeline resources in your OpenShift project using the command below. Run the command below to see information about the nodejs-ex-git pipeline resource:

tkn resource describe nodejs-ex-git

You should see the name of the pipeline resource, the namespace (i.e. your OpenShift project) it has been created in, and a PipelineResource Type of git specifying that this is a git repository.

It also shows the URL of the git repository under the Params section of the output. Also notice the Secret Params section of the output: secret params are how you can mask sensitive information associated with pipeline resources, such as a password or key.

You can also describe the nodejs-ex-image pipeline resource by running the command below:

tkn resource describe nodejs-ex-image

The primary difference you should notice is that nodejs-ex-image is of PipelineResource Type image. You should also notice the OpenShift image registry URL that is specific to the OpenShift project you are working in.

Now that pipeline resources have been specified, you can include these as part of a pipeline run that will deploy the nodejs-ex application out to OpenShift.

Step eight: Trigger a Pipeline

Now that you have created tasks, a pipeline, and pipeline resources, you are ready to trigger a pipeline to deploy the nodejs-ex application out to OpenShift. This is done by creating a PipelineRun via tkn.

The PipelineRun definition below is how you can trigger a pipeline and tie it to the git and image resources that are used for this specific invocation:

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: deploy-pipelinerun-
spec:
  pipelineRef:
    name: deploy-pipeline
  trigger:
    type: manual
  serviceAccount: 'pipeline'
  resources:
  - name: app-git
    resourceRef:
      name: nodejs-ex-git
  - name: app-image
    resourceRef:
      name: nodejs-ex-image

Under the spec property, you'll see the pipelineRef property where the pipeline to be used is specified. You should see the name of the pipeline you created (i.e. deploy-pipeline).

The last property of the PipelineRun of note is resources. This property is how the specific git repository and image registry URLs can be entered for the PipelineRun. You'll see the pipeline resource references you just created in the PipelineRun definition.

While learning about the resource definition behind a pipeline run is great, you do not have to define this resource yourself to trigger a pipeline run. You can create the above PipelineRun to deploy the nodejs-ex application out to OpenShift via tkn.

The tkn command below triggers a pipeline run. The -r flag allows you to specify what pipeline resources are included in a pipeline run. You can see the git and image pipeline resources you created earlier.

You can also notice the -s flag for specifying a service account. This flag is how you can add your pipeline service account to the pipeline run.

Run the command below to kick off the pipeline run:

tkn pipeline start deploy-pipeline \
  -r app-git=nodejs-ex-git \
  -r app-image=nodejs-ex-image \
  -s pipeline

After running this command, the pipeline you created earlier starts running: pods are created to execute the tasks defined as part of the pipeline. After 4-5 minutes, the pipeline run should finish successfully.

Additionally, you will begin to see the pipeline run logs immediately after the pod for the first task is done initializing.
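If you want to follow along from another terminal, both oc and tkn can watch the run; the --last flag assumes a reasonably recent tkn:

# Watch the task pods come and go as the pipeline executes
oc get pods -w

# Follow the logs of the most recent pipeline run
tkn pipelinerun logs --last -f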

Tekton CLI Logs

The log output tells you which task is running as well as which step it is on. You'll see the output structured as [task_name : step_name]. An example from this pipeline run, for the generate step of the build task, is below:

[build : generate]

As these logs come in via tkn, you can see the output from the task-step combinations from the build task:

[build : generate]
...
[build : build]
...
[build : push]
...

You can also eventually see the output of the deploy task execution with its one step:

[deploy : oc]

Upon the successful completion of the pipeline run, you will see the following output from the logs:

[deploy : oc] deploymentconfig.apps.openshift.io/nodejs-ex rolled out

While the pipeline run is executing, you can take a look at how you can visualize a pipeline run through the OpenShift web console in the next section. Leave the logs running so that you can confirm the successful deployment message when the pipeline run finishes.

Step nine: Console View of Your Pipeline

Now that you have kicked off your pipeline execution, you can view it in the Pipelines tab of the OpenShift web console.

Once back in the web console, you should be in the Pipelines section. Click on the name of the pipeline you created (i.e. deploy-pipeline) under the Name column. As you might remember from earlier, clicking on the name takes you to the overview page for a pipeline in your OpenShift project.

Since you now have a pipeline run for deploy-pipeline, you can view it via the Pipeline Runs tab. Click on the Pipeline Runs tab to see the pipeline runs for deploy-pipeline, and make sure to select the Running filter to show pipeline runs that are currently executing.


These filters allow you to also filter pipeline runs by ones that have finished (i.e. Complete) and ones that have failed (i.e. Failed).

To view the current pipeline run, click on the pipeline run name under the Name column.

After clicking on the pipeline run name, you should see the tasks of your pipeline executing.


By hovering your cursor over the build task, you can see the steps that the task will execute. You can also do this with the deploy task.

Just like you viewed the logs through tkn before, you can also view the logs of your pipeline run through the web console by clicking on the Logs tab of the pipeline run page. The logs display the executing tasks as part of your pipeline run; you should see the build task and, eventually, the deploy task on the left side of the web console under the Logs tab.


The log output is the same as what tkn displays. Each task shows the logs of the steps executing.

To verify that the pipeline run executed successfully, you can view the logs through the web console or head back to your terminal.

Once you see the successful deployment verification message shown below, you can go out and see the application running on OpenShift:

[deploy : oc] deploymentconfig.apps.openshift.io/nodejs-ex rolled out


If you do not see the message, just continue to watch the logs come in from the web console or tkn as your pipeline run finishes up.

You can confirm the successful deployment after the logs show the output above by listing the pipeline runs with tkn:

tkn pr ls

Under the STATUS column of the output, you should see Succeeded if the pipeline run was successful. You can also see how long the pipeline run took to execute under the DURATION column.
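The output should look roughly like the following; the run name suffix is generated from generateName, and your timing will differ:

NAME                       STARTED          DURATION    STATUS
deploy-pipelinerun-xxxxx   10 minutes ago   5 minutes   Succeeded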

You are now ready to see your fully deployed application.

Step ten: Verify Deployment

To verify a successful deployment of nodejs-ex, head back to the web console and click on the Topology tab on the left side of the web console.


The Topology view of the OpenShift web console shows visually what is deployed in your OpenShift project. As mentioned earlier, the dark blue ring around the nodejs-ex circle means that a container is up and running the nodejs-ex application. By clicking on the arrow icon on the nodejs-ex node, you can open the URL for nodejs-ex in a new tab and see the application running.

After clicking on the icon, you should see nodejs-ex running in a new tab.

One of the things you will notice about the application is the Request information section in the bottom right corner of your browser tab where nodejs-ex is running. This shows how the MongoDB is connected to nodejs-ex.


MongoDB is used to store the number of times the page is viewed. If you refresh the page, the page view count will increment to show the page was loaded again.

If you were to redeploy nodejs-ex with another pipeline run, the data in the MongoDB would persist, allowing you to make updates to the application while preserving the data that it uses.

Congratulations! You have successfully deployed your first application using OpenShift Pipelines. This wraps up a pretty lengthy article; kudos for following along to the end.
