DevOps: What is it and how is it applied in existing software?
Introduction
DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.
The DevOps Lifecycle
1. Plan
This is the phase that involves 'planning' and 'coding' of the software. The vision of the project is decided during the planning phase, and the developers begin writing the code for the application. There are no DevOps tools strictly required for planning, but there are a number of tools for maintaining the code, which we will see shortly. This phase will not concern us much here, because we are going to work with existing open-source projects. [2]
2. Code
2.1 Write source code
With the business plan ready, or having understood the code's concepts and patterns, you can improve the source code or add a feature from scratch. You can also review and modify the files that the later phases use (the DevOps tool configurations) and customize them according to your needs. Since we are going to explore existing projects, this phase is largely out of our scope; any modifications made to each project will be presented in the relevant phase. The key takeaway from this part is that we should have a good understanding of the code's business logic and the use cases it covers.
2.2 Microservice-based vs. Monolithic Architecture
A monolithic application is built as a single unified unit, while a microservices architecture is a collection of smaller, independently deployable services. Which one is right for you? It depends on a number of factors.
Microservices may not be for everyone. A legacy monolith may work perfectly well, and breaking it down may not be worth the trouble. But as organizations grow and the demands on their applications increase, microservices architecture can be worthwhile.
To achieve a microservice-based architecture in our business case, message brokers help us by transferring messages between services. A very popular example is Apache Kafka, which will be presented in the context of this newsletter when we analyze a microservice-based project. [3]
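To make this concrete before we get to the full project, here is a minimal, hedged sketch of running a single-node Kafka broker locally with Docker Compose. The image name, tag, and KRaft settings below follow the Bitnami image's conventions and are assumptions; adjust them to whatever distribution you actually use.

```yaml
# Hypothetical single-broker Kafka (KRaft mode, no ZooKeeper) for local experiments.
# Image and environment variable names follow the bitnami/kafka image; verify against
# the documentation of the image you choose.
version: "3.8"
services:
  kafka:
    image: bitnami/kafka:3.6
    ports:
      - "9092:9092"                 # client connections from the host
    environment:
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
```

With this file saved as docker-compose.yml, `docker compose up -d` starts the broker, and services on the host can produce and consume on localhost:9092.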
2.3 Source Code Management
Source code management (SCM) is used to track modifications to a source code repository. SCM tracks a running history of changes to a code base and helps resolve conflicts when merging updates from multiple contributors. SCM is also synonymous with Version control.
As software projects grow in lines of code and contributor head count, the costs of communication overhead and management complexity also grow. SCM is a critical tool to alleviate the organizational strain of growing development costs.
2.3.1 Tools
By far, the most widely used modern version control system in the world today is Git. Git is a mature, actively maintained open source project originally developed in 2005 by Linus Torvalds, the famous creator of the Linux operating system kernel. A staggering number of software projects rely on Git for version control, including commercial projects as well as open source. Developers who have worked with Git are well represented in the pool of available software development talent and it works well on a wide range of operating systems and IDEs (Integrated Development Environments).
Having a distributed architecture, Git is an example of a DVCS (Distributed Version Control System). Rather than having only one single place for the full version history of the software, as is common in once-popular version control systems like CVS or Subversion (also known as SVN), in Git every developer's working copy of the code is also a repository that can contain the full history of all changes.
In addition to being distributed, Git has been designed with performance, security and flexibility in mind. [4]
Many times, a git repository is hosted on a remote server for backup, transfer, and collaboration. Popular hosting services for this are GitHub, GitLab, and Bitbucket.
2.3.2 Concepts
The main concept in source code management with the git tool is that source code changes pass through three different stages: the working directory, the staging area (also called the index), and the repository itself.
More about basic git commands can be found here.
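The three stages (working directory, staging area, repository) can be walked through in a throwaway local repository; a minimal sketch:

```shell
# Demonstrate the three git stages in a temporary repository.
REPO_DIR=$(mktemp -d)
cd "$REPO_DIR"
git init -q
git config user.email "demo@example.com"   # local identity just for this demo
git config user.name "Demo"

echo "hello" > greeting.txt       # 1. change exists only in the working directory
git add greeting.txt              # 2. change is staged in the index (staging area)
git commit -q -m "add greeting"   # 3. change is recorded in the local repository

git log --oneline                 # shows the commit we just recorded
```

The same cycle (edit, `git add`, `git commit`) repeats for every change; `git status` at any point tells you which stage each file is in.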
Another concept we are going to use is forking: when we explore a public repository and its source code, we can first fork the repository so that we are able to modify files and keep our changes in a repository that belongs to us. In the context of this newsletter, these forked repositories will be shared each time we study a project.
2.4 Static Code Quality Analysis
Here you can find 6 of the best static code analysis tools. We are going to try SonarQube: tooling that helps you systematically deliver code that meets high-quality standards, for every project, at every step of the workflow. [6]
It offers reports on duplicated code, coding standards, unit tests, code coverage, code complexity, comments, bugs, and security recommendations. SonarQube integrates with Eclipse, Visual Studio, Visual Studio Code, and IntelliJ IDEA development environments through the SonarLint plug-ins, and also integrates with external tools like LDAP, Active Directory, GitHub, and others. SonarQube is expandable with the use of plug-ins. [7]
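To give a flavor of how a project is wired up for analysis, here is a hedged sketch of a sonar-project.properties file placed in the project root. The project key, server URL, and source directory are placeholders; the property names themselves are standard SonarQube scanner settings.

```properties
# Hypothetical sonar-project.properties; values below are placeholders.
sonar.projectKey=my-project
sonar.projectName=My Project
sonar.host.url=http://localhost:9000
sonar.sources=src
# Authentication token generated in the SonarQube UI (kept out of version control):
sonar.login=<token>
```

With a SonarQube server running, invoking the scanner (for example `sonar-scanner` from the project root) reads this file and uploads the analysis results.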
3. Build
3.1 Manual Build
Code will be introduced to the project during the construction phase, and if necessary, the project will be rebuilt to accommodate the new code. This can be accomplished in a variety of ways, although GitHub or a comparable version control site is frequently used.
The developer will request the addition of the code, which will then be reviewed as necessary. The request will be approved if the code is ready to be uploaded, and the code will be added to the project. Even when adding new features and addressing bugs, this method is effective. [8]
The simplest way of building the software is to run locally the commands that build the source code, according to the programming language used. Often, the artifacts that this process creates or updates are then stored in a directory of our local file system.
3.2 Docker Build
When a project needs to be dockerized, either to run it locally with a high level of isolation or to import it into a Kubernetes cluster, a Dockerfile must exist that describes the steps for building the Docker image. Moreover, we must take care of any environment variables the project may use and review them according to our deployment purposes. This phase can also be automated using a CI server such as Jenkins, which we will see later.
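As an illustration, here is a hedged multi-stage Dockerfile sketch for a hypothetical Node.js service; the base images, the `build` and `start` scripts, and the port are all assumptions to adapt to your own stack.

```dockerfile
# Hypothetical Dockerfile for a Node.js service; adapt stages to your stack.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci                  # install dependencies from the lockfile
COPY . .
RUN npm run build           # assumes a "build" script exists in package.json

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app ./
ENV PORT=3000               # environment variable to review per deployment
EXPOSE 3000
CMD ["npm", "start"]
```

The image is then built with something like `docker build -t myorg/myapp:1.0 .` (the tag is a placeholder), and the multi-stage layout keeps build-time tooling out of the final image.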
3.3 Vulnerability Scanning
Before proceeding to the next phases, it is a good idea to scan our project for vulnerabilities, that is, security issues found in the source code and its dependencies. This can be done with command-line tools such as Grype or Snyk, and with UI tools like Dependency Track. It can also be automated through CI tools, with the results visible in their console logs.
4. Test
Software testing is the process of evaluating and verifying that a software product or application does what it is supposed to do. The benefits of testing include preventing bugs, reducing development costs and improving performance.
Though testing itself costs money, companies can save millions per year in development and support if they have a good testing technique and QA processes in place. Early software testing uncovers problems before a product goes to market. The sooner development teams receive test feedback, the sooner they can address issues such as:
4.1 Unit Testing
Validating that each software unit performs as expected. A unit is the smallest testable component of an application. Tools are language-dependent. [9]
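Since unit-testing tools are language-dependent, here is one hedged sketch using Python's built-in unittest module: we create a tiny module and a test for it in a temporary directory, then run the suite. The module and function names are invented for the example.

```shell
# Create a tiny module plus a unit test for it, then run the test suite.
WORK_DIR=$(mktemp -d)
cd "$WORK_DIR"

cat > calc.py <<'EOF'
def add(a, b):
    return a + b
EOF

cat > test_calc.py <<'EOF'
import unittest
from calc import add

class TestAdd(unittest.TestCase):
    def test_add(self):
        # The unit under test is the smallest testable component: one function.
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
EOF

python3 -m unittest -v test_calc   # discovers and runs the single test case
```

The same shape applies in other ecosystems (JUnit for Java, Jest for JavaScript, and so on): one small unit, one assertion-based test, one command to run the suite.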
4.2 Integration Testing
Ensuring that software components or functions operate together. Tools are language-dependent. [9]
4.3 Performance Testing
Testing how the software performs under different workloads. Load testing, for example, is used to evaluate performance under real-life load conditions. [9]
Indicative tools can be found here.
4.4 Smoke Testing
Smoke testing, also called build verification testing or confidence testing, is a software testing method that is used to determine if a new software build is ready for the next testing phase. This testing method determines if the most crucial functions of a program work but does not delve into finer details. [10]
Best Tools
5. Release
5.1 Update the project's version (if the project's dependencies need to be updated)
The release stage is where the Ops team will confirm that the project is ready to be released and build it into the production environment. This stage is critical as it is the last stop after multiple stages for checks — like vulnerabilities and bugs — just before deployment. [12]
So in this stage we must confirm that the project, in its current state, is as up to date as possible and carries as few vulnerabilities as possible. It may be necessary to go back to a previous DevOps phase to fix the software accordingly.
6. Deploy
6.1 IaC (Infrastructure as Code)
Infrastructure as code (IaC) uses DevOps methodology and versioning with a descriptive model to define and deploy infrastructure, such as networks, virtual machines, load balancers, and connection topologies. Just as the same source code always generates the same binary, an IaC model generates the same environment every time it deploys. [13]
Popular tools are Terraform, Ansible, AWS CloudFormation, and Pulumi.
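To show what "the same model generates the same environment" looks like in practice, here is a hedged Terraform sketch that declares a local Docker container; the provider source and resource attributes follow the community Docker provider and are assumptions to verify against its documentation.

```hcl
# Hypothetical Terraform configuration: applying it repeatedly converges
# to the same environment (one nginx container published on port 8080).
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"   # community Docker provider
    }
  }
}

resource "docker_image" "nginx" {
  name = "nginx:1.25"
}

resource "docker_container" "web" {
  name  = "web"
  image = docker_image.nginx.image_id
  ports {
    internal = 80
    external = 8080
  }
}
```

`terraform init` downloads the provider, `terraform plan` previews the changes, and `terraform apply` creates (or re-creates) exactly the declared infrastructure.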
6.2 Docker
Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc.
Docker is a tool that is used to automate the deployment of applications in lightweight containers so that applications can work efficiently in different environments.
Docker tools
With Docker you can pull an image, build a new image based on it, run images in containers, or push them to a container registry.
Popular container registries are Docker Hub, GitHub Container Registry, Amazon ECR, and Google Artifact Registry.
6.3 Kubernetes
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
As a prerequisite for following the rest of this newsletter, you should have a local Kubernetes cluster set up.
Instructions
Basic Entities
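One of the most basic entities is the Deployment, which manages a set of identical pods. A minimal hedged sketch (the app name and image are placeholders):

```yaml
# Hypothetical Deployment manifest: keeps 2 replicas of an nginx pod running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Saved as deployment.yaml, it is applied with `kubectl apply -f deployment.yaml`, after which Kubernetes continuously reconciles the cluster toward the declared two replicas.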
6.4 DNS & SSL Configuration
This step is not strictly mandatory. If you want your project to go live on the web and be visible to every internet user, you need either a dedicated server or a Virtual Machine (VM) running on a cloud provider such as Google, Microsoft, or Amazon. The process below consists of two discrete steps: DNS configuration and HTTPS/SSL configuration.
6.4.1 DNS Configuration
For this, there are many ready services like NoIP, ClouDNS, and many more that can be found here. In most cases you have to create an account, declare your machine's IP address, and define a name for it. If the service you are using is free, you will probably have a fixed suffix in your domain name. After this setup, anyone will be able to access your web page or application through the domain name you specified, using the HTTP protocol.
6.4.2 SSL Configuration
This is a more advanced topic that we will cover in practice at some point. Until then, you can read this article.
7. Operate
7.1 CI (Continuous Integration)
Continuous integration is a DevOps software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. Continuous integration most often refers to the build or integration stage of the software release process and entails both an automation component (e.g. a CI or build service) and a cultural component (e.g. learning to integrate frequently). The key goals of continuous integration are to find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates. [18]
A popular tool for this is Jenkins, but there are many others, which can be found here. The community commonly uses Jenkins to build and push Docker images to a container registry, starting from source code checkout and validation. Pipelines describe the process required in each case.
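That checkout-build-push flow can be sketched as a declarative Jenkins pipeline. The registry hostname, credentials id, and image name below are placeholders; the pipeline structure and the `withCredentials`/`checkout scm` steps are standard Jenkins constructs.

```groovy
// Hypothetical Jenkinsfile: checkout, build a Docker image, push it to a registry.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }   // fetch the source the pipeline was triggered for
        }
        stage('Build image') {
            steps {
                sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Push image') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'registry-creds',
                                                  usernameVariable: 'USER',
                                                  passwordVariable: 'PASS')]) {
                    sh 'docker login registry.example.com -u $USER -p $PASS'
                    sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
                }
            }
        }
    }
}
```

Tagging the image with the Jenkins build number gives every integration run a traceable, unique artifact in the registry.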
7.2 CD (Continuous Deployment)
Continuous deployment (CD, or CDE) is a strategy or methodology for software releases where any new code update or change made through the rigorous automated test process is deployed directly into the live production environment, where it will be visible to customers.
The goal of a continuous deployment process is simple: minimize the cycle time required to write a piece of code, test it to ensure that it functions correctly and does not break the application, deploy it to the live environment and collect feedback on it from users. [19]
A very popular tool for this is Argo CD. Argo CD is implemented as a Kubernetes controller which continuously monitors running applications and compares the current, live state against the desired target state (as specified in the Git repo). A deployed application whose live state deviates from the target state is considered OutOfSync. [20]
The community uses it mostly to pull Docker images and deploy them to a Kubernetes cluster in an automated way.
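The desired target state is itself declared as a Kubernetes resource. Here is a hedged sketch of an Argo CD Application manifest; the repository URL, path, and namespaces are placeholders.

```yaml
# Hypothetical Argo CD Application: syncs manifests from a Git repo to the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-manifests.git
    targetRevision: main
    path: k8s                 # directory in the repo holding the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete cluster resources that were removed from Git
      selfHeal: true   # revert manual drift back toward the Git state
```

With automated sync enabled, merging a change to the manifests in Git is enough for Argo CD to roll it out, which is the GitOps style of continuous deployment described above.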
8. Monitor
8.1 Logging Tools
Some of these tools can be found here.
8.2 Monitoring Tools
Some of these tools can be found here.
Sources: