Azure DevOps
Fotios Tragopoulos
Cloud Engineering Manager | IT Strategy | Technology Transformation | Engineering, AI & Data
DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications. When you adopt DevOps practices, you shorten your cycle time by working in smaller batches, using more automation, hardening your release pipeline, improving your telemetry, and deploying more frequently.
DevOps transformations need to have a clearly defined set of measurable outcomes.
There is a common misconception that DevOps suits new projects better than existing ones, but this is not the case. Existing projects come with existing code bases, existing teams, and often a great deal of technical debt. The more time you spend maintaining existing applications, the less you have for new code. DevOps reduces that maintenance burden and makes software releases less risky. A DevOps transformation can provide exactly that.
Transformation Planning
In a DevOps transformation, users are often categorized into three buckets:
Metrics and KPIs
While there is no specific list of metrics and KPIs that apply to all DevOps projects, the following are commonly used:
Agile Methodology
A brief comparison between the agile and waterfall methodology
The Agile Alliance has published the Manifesto for Agile Software Development and from it has distilled the 12 Principles Behind the Agile Manifesto.
Azure DevOps SaaS Portal
Azure DevOps is a Software as a service (SaaS) platform that provides an end-to-end DevOps toolchain for developing and deploying software. It includes:
Azure DevOps is not the only DevOps SaaS provided by Microsoft. GitHub is a similar platform with its own offering.
GitHub also offers a CLI.
Source Control and Azure Repos
Source control management (SCM) systems provide a running history of code development and help to resolve conflicts when merging contributions from multiple sources. Some of the advantages of using source control are:
Some types of SCMs are:
Azure Repos offers Git and TFVC source control systems and includes in its offering:
GitHub is a Git repository hosting service, but it adds many features of its own, such as CI/CD using GitHub Actions, alerts about vulnerabilities in your code, automatic updates for vulnerable dependencies, semantic code analysis, protection against accidentally committed tokens, review tools, protected branches, Git LFS for large files, documentation with GitHub Pages, project management tools, team administration, codes of conduct, and pre-written licenses for the repository.
Azure Boards has direct integration with Azure Repos, but it can also be integrated with GitHub.
Code Quality & Technical Debt
The quality of code should not be measured subjectively. There are five key traits to measure for higher quality.
Complexity metrics
Complexity metrics can help in measuring quality. Cyclomatic complexity measures the number of linearly independent paths through a program's source code. Another way to understand quality is through Halstead complexity measures, whose metrics include program vocabulary, program length, calculated program length, volume, difficulty, and effort. Code analysis tools can be used to check for considerations such as security, performance, interoperability, language usage, and globalization, and should be part of every developer's toolbox and software build process.
Quality metrics
Git for Enterprise DevOps
There are two philosophies on how to organize your repos: a monorepo (all the source code in a single repository) or multiple repos. The trade-off is between fast onboarding and fine-grained developer permissions. The most common branch workflows are:
Continuous delivery demands a significant level of automation. Git gives you the ability to automate most of the checks in your code base even before committing the code into your local repository, let alone the remote. Git hooks are a mechanism that allows custom scripts to be run before or after certain Git lifecycle events occur, such as committing, merging, and pushing. The only criterion is that hooks must be stored in the .git/hooks folder in the repo root, and they must be named to match the corresponding events:
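As a minimal sketch of the mechanism, the following demo installs a pre-commit hook into a throwaway repository and shows it rejecting a commit. The "WIP" policy, paths, and identities are all illustrative, not a recommended rule set:

```shell
# Demo: a pre-commit hook that blocks commits whose staged changes
# contain a "WIP" marker. Runs in a throwaway repo under /tmp.
repo=/tmp/git-hook-demo
rm -rf "$repo"
mkdir -p "$repo"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# The hook file name must match the event, and the file must be executable.
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
if git diff --cached | grep -q "WIP"; then
    echo "Commit blocked: staged changes contain a WIP marker."
    exit 1
fi
HOOK
chmod +x .git/hooks/pre-commit

echo "WIP: not ready yet" > notes.txt
git add notes.txt
git commit -m "try to commit" || echo "hook rejected the commit"
```

Because the hook exits with a non-zero status, Git aborts the commit before it is recorded.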
Azure Pipelines
The core idea is to create a repeatable, reliable, and incrementally improving process for taking software from concept to customer. Azure Pipelines is a fully featured service mostly used to create cross-platform CI/CD (Continuous Integration and Continuous Deployment) pipelines. It works with most Git providers and can deploy to most major cloud services.
Continuous integration is used to automate tests and builds for your project and catch bugs or issues early in the development cycle, when they're easier and faster to fix.
Artifacts are produced by CI systems; an artifact is a collection of code and binaries.
Continuous delivery or Continuous Deployment is used to automatically deploy and test code in multiple stages to help drive quality.
An Agent is installable software that runs a build and/or deployment job. If your pipelines are in Azure Pipelines, you have the convenient option of building and deploying with a Microsoft-hosted agent. Each time a pipeline runs, a fresh VM is provided.
A Build is one execution of a pipeline.
A deployment target is a VM, container, or any service used to host the application being developed. A pipeline might deploy the app to one or more deployment targets.
A job represents an execution boundary for a set of steps. All the steps run together on the same agent. A build contains one or more jobs. Most jobs run on an agent.
A pipeline defines the CI/CD process for your app. It can be thought of as a script that defines how your test, build, and deployment steps are run.
Azure Pipelines offers a free tier for public projects: your pipeline must be part of an Azure Pipelines public project and must build a public repository from GitHub or from the same public project in your Azure DevOps organization.
Azure Pipelines has a visual designer, which is great for users who are new to CI/CD. Mirroring the rise of interest in infrastructure as code, there has been considerable interest in defining pipelines as code. A typical microservice architecture requires many deployment pipelines that are for the most part identical. When you use YAML, you define your pipeline in code alongside the rest of the code for your app. The benefits of using YAML are that the pipeline is versioned with your code and follows the same branching structure, and you get validation of your changes through code reviews in pull requests and branch build policies.
Continuous Integration
The idea is to minimize the cost of integration by making it an early consideration. The end goal of CI is to make integration a simple, repeatable process that is part of the everyday development workflow, in order to reduce integration costs and respond to defects early. Continuous integration relies on four key elements for successful implementation: a Version Control System, a Package Management System, a Continuous Integration System, and an Automated Build Process.
Azure Pipelines can automatically build and validate every pull request and commit to your Azure Repos Git repository. Azure Pipelines can be used with Azure DevOps public and private projects. Most pipelines will have a Name (the build number format; if you do not explicitly set a name format, you get an integer number), Trigger, Variables, Job (a set of steps that are executed by an agent in a queue), Pool, Checkout, and Steps.
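As a sketch under stated assumptions (a Microsoft-hosted Ubuntu agent, illustrative variable and step names), a minimal azure-pipelines.yml touching most of these elements might look like this:

```yaml
name: $(Date:yyyyMMdd)$(Rev:.r)    # build number format

trigger:
- main                             # run on commits to main

pool:
  vmImage: 'ubuntu-latest'         # Microsoft-hosted agent

variables:
  buildConfiguration: 'Release'    # illustrative variable

steps:
- checkout: self                   # fetch the repository
- script: echo Building $(buildConfiguration)
  displayName: Build step
```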
A stage in a pipeline is a collection of related jobs. This example runs three stages, one after another. The middle stage runs two jobs in parallel.
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building
- stage: Test
  jobs:
  - job: TestIOS
    displayName: Test on iOS
    steps:
    - script: echo Testing iOS
  - job: TestAndroid
    displayName: Test on Android
    steps:
    - script: echo Testing Android
- stage: Deploy
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploying
You can export reusable sections of your pipeline as a template. Azure Pipelines supports a maximum of 50 unique template files in a single pipeline.
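As a sketch (the file name build-steps.yml and its contents are assumptions), a reusable steps template and a pipeline that consumes it might look like this:

```yaml
# build-steps.yml — a reusable steps template (illustrative)
steps:
- script: npm install
  displayName: Install dependencies
- script: npm run build --if-present
  displayName: Build
```

```yaml
# azure-pipelines.yml — referencing the template by relative path
jobs:
- job: Build
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - template: build-steps.yml
```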
Security, Secrets and Application Configuration
Security cannot be a separate department, and it cannot be added at the end of a project. Security must be part of DevOps; together they are called DevSecOps. DevSecOps incorporates the security team and their capabilities into your DevOps practices, making security a responsibility of everyone on the team. Security needs to be looked at holistically across the application life cycle.
Threat modeling is a core element of the Security Development Lifecycle (SDL). It's an engineering technique to identify threats, attacks, vulnerabilities, and countermeasures that could affect your application. You can use threat modeling to shape your application's design, meet your company's security objectives, and reduce risk.
There are five major threat modeling steps:
The Microsoft Threat Modeling Tool makes threat modeling easier for all developers through a standard notation for visualizing system components, data flows, and security boundaries. The Threat Modeling Tool enables software architects to communicate about the security design of their systems, analyze those designs for potential security issues, suggest and manage mitigations for security issues.
Continuous security validation should be added at each step from development through production. Validation in the CI/CD begins before the developer commits code. Static code analysis tools in the IDE provide the first line of defense to help ensure that security vulnerabilities are not introduced into the CI/CD process. The process for committing code into a central repository should have controls to help prevent security vulnerabilities. Using Git source control in Azure DevOps with branch policies provides a gated commit experience that can provide this validation. The CI builds should run static code analysis tests to ensure that the code is following all rules for both maintenance and security.
One of the key reasons to move configuration away from source control is to delineate responsibilities, commonly known as separation of concerns.
External configuration store patterns store configuration information in an external location and provide an interface to quickly and efficiently read and update configuration settings. In cloud computing, this is typically a cloud-based storage service, but it could be a hosted database or another system. This pattern is useful for configuration settings that are shared between multiple applications and application instances, as a complementary store for some of an application's settings, or to simplify administration of multiple applications. Azure Key Vault allows you to manage your organization's secrets and certificates in a centralized repository, where they are further protected by Hardware Security Modules (HSMs). It can be used for secret, key, and certificate management. With Key Vault, application developers no longer need to store security information in their applications. Access to a key vault requires proper authentication and authorization before a user or application can get access; authentication is done via Azure Active Directory.
GitHub Actions
They can be used for a wide variety of tasks like automated testing, automatically responding to new issues or mentions, triggering code reviews, handling pull requests and branch management. They are defined in YAML and reside within GitHub repositories.
GitHub tracks events that trigger the start of workflows. Workflows are the unit of automation and they contain Jobs. Jobs use Actions to get work done.
This workflow will build and push a node.js application to an Azure Web App when a release is created:
on:
  release:
    types: [created]

env:
  AZURE_WEBAPP_NAME: your-app-name
  AZURE_WEBAPP_PACKAGE_PATH: '.'
  NODE_VERSION: '12.x'

jobs:
  build-and-deploy:
    name: Build and Deploy
    runs-on: ubuntu-latest
    environment: production
    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ env.NODE_VERSION }}
      uses: actions/setup-node@v2
      with:
        node-version: ${{ env.NODE_VERSION }}
    - name: npm install, build, and test
      run: |
        # Build and test the project, then
        # deploy to Azure Web App.
        npm install
        npm run build --if-present
        npm run test --if-present
    - name: 'Deploy to Azure WebApp'
      uses: azure/webapps-deploy@v2
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
        package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
Workflows include several standard syntax elements.
name - optional but highly recommended. It appears in several places within the GitHub UI.
on - the event or list of events that will trigger the workflow.
jobs - the list of jobs to be executed.
runs-on - tells Actions which runner to use.
steps - the list of steps for the job.
uses - tells Actions which predefined action needs to be retrieved, e.g. an action that installs Node.js.
run - tells the job to execute a command on the runner, e.g. an npm command.
GitHub provides several hosted runners, so you don't need to spin up your own infrastructure to run actions. For JavaScript code, there are implementations of Node.js on Windows, macOS, and Linux. If you need other languages, a Docker container can be used; at present, Docker container support is Linux-based only. If you need configurations different from the ones provided, you can create a self-hosted runner. GitHub also provides a series of built-in environment variables.
It's important to follow best practices when creating actions:
Secrets are similar to environment variables but encrypted. They can be created at the Repository or the Organization level. If secrets are created at the organization level, access policies can be used to limit the repositories that can use them.
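For illustration (the secret name API_TOKEN and the script path are assumptions), a workflow step might consume a repository secret like this:

```yaml
steps:
- name: Call a deployment webhook
  run: ./scripts/notify.sh              # illustrative script
  env:
    API_TOKEN: ${{ secrets.API_TOKEN }} # decrypted only at runtime, masked in logs
```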
Dependency Management Strategy
It is essential that the software dependencies that are introduced in a project and solution can be properly declared and resolved. There are many aspects of a dependency management strategy.
The goal is to reduce the size of your own codebase and system. You achieve this by removing certain components of your solution. These are going to be centralized, reused, and maintained independently. You will remove those components and externalize them from your solution at the expense of introducing dependencies on other components.
To identify the dependencies in your codebase you can scan your code for patterns and reuse, as well as analyzing how the solution is composed of individual modules and components.
At this point I will assume that you are already familiar with package management: how to package dependencies and the various packaging formats, feeds, sources, and package managers. Most package management systems provide feeds from which you can request packages to install in your applications. In Azure Artifacts, you can have multiple feeds in your projects, and they are always private. It is recommended to create one feed per package type, so it is clear what each feed contains. Each feed can contain one or more upstream sources and can manage its own security. Azure Artifacts has four different roles for protecting package feeds:
Continuous Delivery
The need to deliver software fast, with high quality, and at low cost guided us to Continuous Delivery. Value should flow through our pipelines continuously rather than piling up to be released all at once. To explain CD a bit more, these are the eight principles of continuous delivery:
The best way to move your software to production safely while maintaining stability is by separating your functional release from your technical release (deployment).
In order to deploy multiple times a day, everything needs to be automated, and as such tests need to run every time a new release is created. Instead of turning all your manual tests into automated UI tests, you need to rethink your testing strategy. Tests can be divided into 4 categories.
Big monolithic applications are more difficult to deliver. Every part that is changed might impact other parts that did not change. Breaking up your software into smaller, independent pieces, is in many cases a good solution. One approach to solving these issues is to implement microservices.
Microservices architecture
A microservice is an autonomous, independently deployable, and scalable software component. Microservices are small, focused on doing one thing very well, and able to run autonomously. You need to keep track of their interfaces and how they interact with each other, and you need to maintain multiple application lifecycles and Continuous Delivery pipelines.
The traditional or classical deployment pattern moved software through a development stage, a testing stage, perhaps an acceptance or staging stage, and finally a production stage. But end users always use your application differently than expected: unexpected events happen in a data center, and multiple events from multiple users occur at the same time, triggering code paths that were never tested in that way. To overcome this, we need to embrace the fact that some features can only be tested in production. Some modern deployment patterns that manage testing in production are:
Infrastructure as Code
IaC is the concept of managing your operations environment in the same way you do applications or other code. Rather than manually making configuration changes or using one-off scripts to make infrastructure adjustments, the operations infrastructure is managed instead using the same rules and strictures that govern code development. Benefits of infrastructure as code are:
There are several approaches that you can adopt to implement IaC and CaC. Two of the main methods of approach are the declarative (states what the final state should be) and the imperative (the script states the how for the final state of the machine by executing the steps to get to the finished state).
Idempotence is a mathematical term that can be used in the context of Infrastructure as Code and Configuration as Code. It is the ability to apply one or more operations against a resource, resulting in the same outcome. If you apply a deployment to a set of resources 100 times, you should end up with the same result after each application of the script or template.
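A tiny sketch of the idea (the paths and the "desired state" are illustrative): applying the same operation twice leaves the system in the same state, which is exactly what IaC tooling aims for.

```shell
# Idempotence in practice: mkdir -p and an overwrite both converge on the
# same end state no matter how many times they are applied.
STATE_DIR=/tmp/idempotence-demo
rm -rf "$STATE_DIR"

ensure_environment() {
    mkdir -p "$STATE_DIR/config"                        # no error if it already exists
    printf 'retries=3\n' > "$STATE_DIR/config/app.conf" # same content every run
}

ensure_environment
ensure_environment   # second application: identical end state
cat "$STATE_DIR/config/app.conf"
```

Contrast this with a plain `mkdir` (no `-p`), which fails on the second run: that operation is not idempotent.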
Using Resource Manager templates will make your deployments faster and more repeatable.
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": {},
  "variables": {},
  "functions": [],
  "resources": [],
  "outputs": {}
}
Modularize templates
When using Azure Resource Manager templates, a best practice is to modularize them by breaking them out into the individual components. The primary methodology to use to do this is by using linked templates.
"resources": [
{
"apiVersion": "2021-05-25",
"name": "linkTemplate",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
Link_To_External_Template
}
}
]
You can also nest a template within the main template:
"resources": [
{
"apiVersion": "2021-05-25",
"name": "NestedTemplate",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0",
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageName')]",
"apiVersion": "2021-05-25",
"location": "Central EU",
"properties": {
"accountType": "Standard_LRS"
}
}
]
}
}
}
]
Deployment modes
There are three options for deployments:
Azure Automation
It is an Azure service that provides a way for users to automate the manual, long-running, error-prone, and frequently repeated tasks that are commonly performed in a cloud and enterprise environment. Azure Automation saves time and increases the reliability of regular administrative tasks. You can even schedule the tasks to be performed automatically at regular intervals. You can automate processes using runbooks or automate configuration management by using Desired State Configuration (DSC).
Some Azure Automation capabilities are:
Desired State Configuration is a configuration management approach that you can use for configuration, deployment, and management of systems to ensure that an environment is maintained in a state that you specify and doesn't deviate from that state.
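As an illustrative sketch of this declarative style (the configuration name, node, and resource choice are assumptions), a DSC configuration ensuring that IIS is installed might look like:

```powershell
Configuration WebServerState {
    Node 'localhost' {
        WindowsFeature IIS {
            Ensure = 'Present'   # describes the desired state,
            Name   = 'Web-Server' # not the steps to reach it
        }
    }
}
```

The Local Configuration Manager on the node is then responsible for making the machine match, and keep matching, this state.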
DSC consists of three primary components:
There are two methods of implementing DSC:
3rd Party IaC Tools in Azure
Configuration management tools enable changes and deployments to be faster, repeatable, scalable, predictable, and able to maintain the desired state. Some advantages of using configuration management tools include adherence to coding conventions, idempotency (the end state remains the same no matter how many times the code is executed), and distribution designs that improve the management of large numbers of remote servers.
Chef
Chef Infra helps you to manage your infrastructure in the cloud, on-premises, or in a hybrid environment by using instructions (or recipes) to configure nodes. A node, or chef-client, is any physical or virtual machine (VM), cloud, or network device that is under management by Chef Infra.
Chef Infra has three main architectural components:
Chef Infra also uses concepts called cookbooks and recipes, which are essentially the policies that you define and apply to your servers. You can deploy Chef on Microsoft Azure from the Azure Marketplace using the Chef Automate image.
Puppet
Puppet is a deployment and configuration management toolset that provides the enterprise tools you need to automate the entire lifecycle of your Azure infrastructure. It provides a series of open-source configuration management tools and projects, and a configuration management platform that allows you to maintain state in both your infrastructure and application deployments.
Puppet consists of the following components:
Ansible
Ansible is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. Using Ansible, you can provision your entire cloud infrastructure. In addition to provisioning and configuring applications and their environments, Ansible enables you to automate deployment and configuration of resources in your environment such as virtual networks, storage, subnets, and resource groups. With Ansible you don't have to install software on the managed machines.
Ansible models your IT infrastructure by describing how all your systems interrelate, rather than just managing one system at a time. The core components of Ansible are:
Terraform
HashiCorp Terraform is an open-source tool that allows you to provision, manage, and version cloud infrastructure. It codifies infrastructure in configuration files that describe the topology of cloud resources such as VMs, storage accounts, and networking interfaces. Terraform's CLI provides a simple mechanism to deploy and version the configuration files to Azure or any other supported cloud service. The CLI also allows you to validate and preview infrastructure changes before you deploy them.
Some of Terraform’s core components include:
Containers and Docker
Virtual machines provide hardware virtualization, while containers provide operating-system-level virtualization by abstracting the user space rather than the entire operating system. The operating-system-level architecture is shared across containers, which is what makes containers so lightweight. Containers are portable and allow you to have a consistent development environment. A container runs a packaged application, while Docker is the container runtime and orchestrator.
Containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another. A container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
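As a minimal sketch of that bundling (base image, paths, port, and commands are illustrative), a Dockerfile for a Node.js service packages the runtime, dependencies, and application code into one portable image:

```dockerfile
# Base image provides the user space and the Node.js runtime
FROM node:12-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --production

# Add the application code itself
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```

The same image then runs identically on a developer laptop, a CI agent, or a production cluster.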
Containers become very compelling when it comes to Microservices. Microservices is an approach to application development where every part of the application is deployed as a fully self-contained component, that can be individually scaled and updated. In production you might scale out to different numbers of instances across a cluster of servers depending on their resource demands as customer request levels rise and fall. The namespace and resource isolation of containers prevents one microservice instance from interfering with others and use of the Docker packaging format and APIs unlocks the Docker ecosystem for the microservice developer and application operator. With a good microservice architecture you can solve the management, deployment, orchestration, and patching needs of a container-based service with reduced risk of availability loss while maintaining high agility.
Azure provides a wide range of services that help you to work with containers:
Azure Kubernetes Service (AKS)
Kubernetes is an open-source cluster orchestration technology hosted by the Cloud Native Computing Foundation. AKS makes it quicker and easier to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand without taking applications offline. It manages health monitoring and maintenance, Kubernetes version upgrades, and patching.
Implementing Software Feedback
Deploying code into production and doing a health check is not enough. We need to look beyond that point and continue to monitor how the software runs. Getting feedback about what happens after deployment is essential to stay competitive and make the system better. The right feedback loop must be fast, relevant, accessible, and actionable. Engineering teams need to set action rules and own the complete code quality. Feedback is fundamental not only to DevOps practice but throughout the SDLC process.
Continuous Monitoring
Continuous monitoring builds on the concepts of CI/CD and it refers to the process and technology required to incorporate monitoring across each phase of your DevOps and IT operations lifecycles. It helps to continuously ensure the health, performance, and reliability of your application and infrastructure as it moves from development to production.
Feedback Mechanisms
Engaging customers throughout your product lifecycle is a primary Agile principle. Each team needs to interact directly with customers on the feature sets they own.
Site Reliability Engineering (SRE)
It empowers software developers to own the ongoing daily operation of their applications in production. The goal is to bridge the gap between the development team, which wants to ship things as fast as possible, and the operations team, which doesn't want anything to blow up in production. A key skill of a site reliability engineer is a deep understanding of the application: the code, how it runs, how it is configured, how it scales, and how it is monitored.
Some of the typical responsibilities of a site reliability engineer are:
Both SRE and DevOps are methodologies addressing organizations' needs for production operation management. DevOps raises problems and dispatches them to Dev to solve; the SRE approach is to find problems and solve some of them itself. DevOps practices can help ensure IT racks, stacks, configures, and deploys the servers and applications; the site reliability engineers can then handle the daily operation of those applications.
DevSecOps
If you want to take full advantage of the agility and responsiveness of a DevOps approach, IT security must also play an integrated role in the full life cycle of your apps. DevSecOps means thinking about application and infrastructure security from the start. It also means automating some security gates to keep the DevOps workflow from slowing down. Two features of DevSecOps pipelines that are not found in standard DevOps pipelines are:
Azure Security Center
It is a monitoring service that provides threat protection across all your services. Security Center can:
Open-Source Software
The concerns with using open-source components are that the source code can be of low quality, have no active maintenance, contain malicious code, have security vulnerabilities, or carry unfavorable licensing restrictions. The starting point for secure development is to use secure coding practices. OWASP regularly publishes a set of Secure Coding Practices; their guidelines currently cover advice in the following areas:
As the dependency on third-party open-source components increases, so does the risk of security vulnerabilities and hidden license requirements, and with it the risk of compliance issues. Identifying such issues early in the release cycle gives you advance warning and enough time to fix them. There are many tools that can scan for these vulnerabilities within the build and release pipelines, like:
Conclusion
Software and the Internet have transformed the world and its industries, from shopping to entertainment to banking. Software no longer supports a business, rather it becomes an integral component of every part of a business. Companies interact with their customers through software delivered as online services or applications and on all sorts of devices. They also use software to increase operational efficiencies by transforming every part of the value chain.
In the same way that companies transformed how they design, build, and deliver products using industrial automation, companies in today’s world must transform how they build and deliver software.
I hope you've enjoyed reading this article as much as I've enjoyed writing it. Feel free to share it.