Part 2 -- Jenkins, Docker and DevOps: The Innovation Catalysts
This is the second in a series of posts about how the combination of Jenkins, Docker and continuous delivery practices can greatly accelerate software delivery pipelines - and, with them, the pace of innovation.
One of the most interesting things about the Docker phenomenon is how it helps development and operations teams work together by changing the abstraction level from an application binary to a container. Jenkins has a similar impact on development and operations teams. Jenkins was originally created as a CI server - a server that builds, runs tests and reports results whenever source code or configuration changes. Today, however, it is very commonly used to orchestrate the entire software delivery process across an organization and between teams. The introduction of the Pipeline feature in Jenkins has made it simple to define and share the processes involved in a full CD pipeline. Thus, like Docker, Jenkins has raised the abstraction level its users work at - from a build job to a pipeline - and it lets those pipelines interact with surrounding systems through a domain-specific language (DSL). At its heart, a DSL simply lets you address those surrounding systems in the familiar nouns and verbs they understand.
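To make that concrete, here is a minimal sketch of what a pipeline definition can look like in the scripted Pipeline DSL. The stage names, Maven build and deploy script are hypothetical placeholders, not a recommended layout:

```groovy
// A minimal, hypothetical Jenkinsfile written in the scripted Pipeline DSL.
// Stage names, the Maven build and the deploy script are placeholders.
node {
    stage('Checkout') {
        checkout scm               // fetch the revision that triggered the run
    }
    stage('Build & Test') {
        sh 'mvn -B clean verify'   // assumes a Maven project
    }
    stage('Deploy to Staging') {
        sh './deploy.sh staging'   // hypothetical deployment script
    }
}
```

Because the definition is just code, it can be versioned alongside the application and shared between teams.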
From a Jenkins Perspective…
One way to view Docker from a Jenkins perspective is simply as a different, improved packaging approach, in much the same way that applications have historically been packaged as RPMs or through other mechanisms. From that standpoint, automating the package creation, update and maintenance process - particularly for complex projects - is exactly the kind of problem Jenkins was made to address.
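As a minimal sketch of that idea - assuming the Docker Pipeline plugin, a Dockerfile at the root of the repository and a hypothetical image name - a Jenkins pipeline can rebuild and retag the "package" on every change:

```groovy
// Hypothetical sketch: Jenkins automating image creation and maintenance.
// 'myorg/myapp' is a placeholder; the Dockerfile is assumed to live in the repo.
node {
    checkout scm
    // Build a freshly tagged image on every change; '--pull' refreshes the
    // upstream base layers so OS and dependency updates are picked up too.
    docker.build("myorg/myapp:${env.BUILD_NUMBER}", "--pull .")
}
```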
Yet, Docker is a lot more than a packaging approach, because it exposes a toolset and API around the container. That toolset and API are constantly being extended by Docker's growing ecosystem. Because Docker encapsulates both the application and the application's environment or infrastructure configuration, it provides a key building block for two essential aspects of a continuous delivery pipeline:
- First, Docker makes it easier to test exactly what you deploy. Developers deliver Docker containers, or elements that are consumed by other containers; IT operations then deploys those same containers. The opportunity to screw up in a handoff or reassembly step is reduced or eliminated. Docker containers encourage a central tenet of continuous delivery - reuse the same binaries at each step of the pipeline to ensure no errors are introduced by the build process itself (a pattern sketched just after this list).
- Second, Docker containers provide the basis for immutable infrastructure. Applications can be added, removed, cloned, and their constituent pieces can change, without leaving any residue behind. Whatever mess a failed deployment causes is constrained within the container. Deleting and adding become so much easier that you stop thinking about how to update the state of a running application. In addition, when infrastructure can be changed (and it must change) independently of the applications it hosts - a very traditional line between development and operations responsibilities - problems are inevitable. Again, Docker's container-level abstraction provides an opportunity to reduce or eliminate that exposure. This becomes particularly important as enterprises move from traditional virtualization to private or public cloud infrastructure.

None of these benefits appear magically. Your software and infrastructure still need to be created, integrated with other software, configured, updated and managed throughout their lifetimes. Docker gives you improved tools to do that, especially when combined with the power of Jenkins to automate these processes.
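Here is how that combination might look in practice - a minimal sketch of the "build once, promote the same image" pattern, assuming the Docker Pipeline plugin and hypothetical image name, registry URL and credentials ID:

```groovy
// Sketch of "build once, promote the same image": the binary that is tested
// is the binary that ships. All names below are hypothetical.
node {
    checkout scm
    docker.withRegistry('https://registry.example.com', 'registry-creds-id') {
        def app = docker.build("myorg/myapp:${env.BUILD_NUMBER}")  // built exactly once

        app.inside {
            sh './run-tests.sh'   // tests run against that exact image
        }

        app.push()                // the very same image is what gets deployed
    }
}
```

Because the same image object flows through every step, the thing that was tested is, byte for byte, the thing that gets deployed.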
From a Docker Perspective…
From a Docker perspective, at a very basic level Jenkins is just another application that should be running in a Docker container. Jenkins itself needs to be updated, and it often has to run in a specific environment to be able to test properly - for example, with access to backend systems - which calls for a Jenkins image with strict access controls and a strong approval process for updates. Environment and configuration changes should always result in a Jenkins build that produces a new image.
But Jenkins is much more than that: it is the application that passes validated containers between groups in an organization. Jenkins also helps build higher-level testing constructs - for example, integration tests may require access to backend systems, necessitating a set of Docker Compose-managed images that Jenkins can bring up, run tests against and bring down. Jenkins can ultimately create gates that ensure only containers that have been properly pre-tested and pre-approved make it to the next step. In a world where Docker containers are so easily created and multiplied, the role of such a validation agent should not be underestimated.
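A minimal sketch of that flow - assuming docker-compose is available on the agent, and using a hypothetical compose file, test script and approver group - might look like this:

```groovy
// Hypothetical sketch: Jenkins standing up a Docker Compose environment for
// integration tests, then gating promotion behind an approval.
node {
    checkout scm
    try {
        sh 'docker-compose -f docker-compose.test.yml up -d'    // backing services
        sh './run-integration-tests.sh'                          // hypothetical tests
    } finally {
        sh 'docker-compose -f docker-compose.test.yml down -v'  // always torn down
    }
}

// A gate: only containers that passed the tests above move to the next step.
input message: 'Promote this build to the staging environment?', submitter: 'qa-team'
```

The try/finally guarantees the environment is torn down even when tests fail, and the input step is the gate that keeps unvetted containers from moving on.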
Taken Together: (Jenkins + Docker) = CD
Today, many Jenkins users take advantage of the combination of Docker and Jenkins to improve their CI and CD processes. They can do this because of Jenkins' extensibility and the flexibility with which Docker can encapsulate deliverables. As you might expect, two of the leading proponents of using Docker and Jenkins together are the Docker team and the Jenkins team! The Docker team uses Jenkins and Docker to test Docker. The Jenkins team has used Jenkins to build Jenkins for a very long time, and it now also uses Docker, in combination with Puppet, as an integral part of the test and delivery process for the jenkins-ci.org web site. Many other people have shared their experiences in blogs and articles.

As this experience has grown, CloudBees and the Jenkins community have identified areas where automation and management could be greatly improved when Docker and Jenkins are used together - and have developed solutions to address them. The goal has been to minimize the handcrafting and guesswork involved in figuring out how to make the best use of the two tools in combination. The new capabilities have been released as part of open source Jenkins, together with key integrations into the CloudBees Jenkins Platform. The new functionality includes:
- The ability for Jenkins to understand and use Docker-based executors, providing improved isolation and utilization
- Easy interaction with Docker image repositories, including Docker Hub, making it possible to store new images built by Jenkins, as well as load images so they can be used as part of a Jenkins job
- Rich Jenkins pipeline integration with Docker, making it possible to orchestrate the builds, tests and deployments of any application - including but not limited to Docker images - using Docker environments (see the sketch after this list)
- Extension of Jenkins' native fingerprinting capability to enhance tracking of Docker images across development and delivery processes, making it possible to track complete delivery processes, from code to production
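As a rough sketch of the first three items - using the docker global variable provided by the Docker Pipeline plugin, with hypothetical image names and test commands - a job might run its build inside a disposable tool container and pull a published image straight from Docker Hub:

```groovy
// Hedged sketch of Docker-based executors and registry interaction.
// Image names and the test script are hypothetical.
node {
    checkout scm

    // Docker-based executor: the build runs inside a throwaway tool container,
    // isolated from the host and from other jobs.
    docker.image('maven:3-jdk-8').inside {
        sh 'mvn -B clean verify'
    }

    // Load an image from Docker Hub and use it as part of the job: start it,
    // run tests against it, and let Jenkins clean it up afterwards.
    docker.image('myorg/test-backend:latest').withRun { c ->
        sh "./run-acceptance-tests.sh ${c.id}"
    }
}
```

The images touched along the way can also be fingerprinted, which is what lets Jenkins trace a container from the commit that produced it to the environment where it ultimately runs.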
The design goal behind the Jenkins/Docker integration was to treat Docker images as first-class citizens of any Jenkins operation, from CI activities - where Jenkins can operate on sanitized and isolated Docker images - to CD activities, where much of the work involves the delicate orchestration of multiple Docker containers along with third-party integrations for processes such as unit testing, load testing, UX testing and so on. The tight integration of Jenkins and Docker means neither feels like an add-on to the other. Instead, they form a cohesive approach to CI and CD that solves problems for both development and operations.
Stay tuned for part three!