Azure DevOps

DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications. When you adopt DevOps practices, you shorten your cycle time by working in smaller batches, using more automation, hardening your release pipeline, improving your telemetry, and deploying more frequently.

DevOps transformations need to have a clearly defined set of measurable outcomes, for example:

  • Reduce the time spent on fixing bugs by 60%.
  • Reduce the time spent on unplanned work by 70%.
  • Reduce the out-of-hours work required by staff to no more than 10% of total working time.
  • Remove all direct patching of production systems.

There is a common misconception that DevOps suits new projects better than existing projects, but this is not the case. Existing projects come with existing code bases, existing teams, and often a great amount of technical debt. When you spend your time maintaining existing applications, you have limited ability to work on new code. A DevOps transformation reduces that maintenance burden and makes software releases less risky. The practices that drive such a transformation include:

  1. CI - Continuous Integration drives the ongoing merging and testing of code, which leads to finding defects early.
  2. CD - Continuous Delivery to production and test environments helps to quickly fix bugs.
  3. Version Control - enables teams to communicate and collaborate effectively, as well as to integrate with tools for monitoring and deployments.
  4. Agile planning and lean project management techniques - plan and isolate work into sprints, manage team capacity, and adapt quickly to changes.
  5. Definition of Done - working software, with telemetry collected to validate it.

  1. Monitoring and Logging - of application health and customer usage helps form a hypothesis and quickly validate or disprove strategies.
  2. Public and Hybrid Clouds - remove traditional bottlenecks. Whether you use IaaS to lift and shift your existing apps, or PaaS to gain unprecedented productivity, the cloud gives you a datacenter without limits.
  3. IaC - Infrastructure as Code enables the automation and validation of the creation and teardown of environments, while delivering secure and stable application hosting platforms.
  4. Microservices - isolate business use cases into small reusable services that communicate via interface contracts, which enables scalability and efficiency.

Transformation Planning

In a DevOps transformation, users are often categorized into three buckets:

  • Canaries - voluntarily test bleeding-edge features as soon as they are available.
  • Early adopters - voluntarily preview releases that are considered more refined than the code canary users are exposed to.
  • Users - consume the products after they have passed through canaries and early adopters.

Metrics and KPIs

While there is no specific list of metrics and KPIs that apply to all DevOps projects, the following are commonly used:

  1. Faster Outcomes

  • Deployment Frequency
  • Deployment Speed
  • Deployment Size
  • Lead Time

  2. Efficiency

  • Server to Admin Ratio
  • Staff Member to Customers Ratio
  • Application Usage
  • Application Performance

  3. Quality and security

  • Deployment failure rates
  • Application failure rates
  • Mean time to recover
  • Bug report rates
  • Test pass rates
  • Defect escape rate
  • Availability
  • Service level agreement achievement
  • Mean time to detection

  4. Culture

  • Employee morale
  • Retention rates

Agile Methodology

In brief, compared with the sequential waterfall methodology, agile works iteratively: requirements can change late in development, and working software is delivered in short increments rather than in a single phase at the end.

The Agile Alliance has published the Manifesto for Agile Software Development, and from it has distilled the 12 Principles Behind the Agile Manifesto.

Azure DevOps SaaS Portal

Azure DevOps is a Software as a service (SaaS) platform that provides an end-to-end DevOps toolchain for developing and deploying software. It includes:

  • Azure Boards - agile planning, work item tracking, visualisation and reporting tool
  • Azure Pipelines - a CI/CD platform with support for containers and Kubernetes
  • Azure Repos - cloud-hosted private Git repositories
  • Azure Artifacts - integrated package management with support for Maven, npm, Python and NuGet package feeds
  • Azure Test Plans - integrated planned and exploratory testing solution

Azure DevOps is not the only DevOps SaaS offering from Microsoft. GitHub is a similar platform, which includes:

  • Codespaces - a cloud-hosted development environment
  • Repos - public and private repositories
  • Actions - for the creation of automation workflows, which can include environment variables and customized scripts
  • Artifacts
  • Security - through code scanning and review features, including automated code review assignment

GitHub also offers a CLI.

Source Control and Azure Repos

Source control management (SCM) systems provide a running history of code development and help to resolve conflicts when merging contributions from multiple sources. Some of the advantages of using source control are:

  • Create workflows - prevent the chaos of everyone using their own development process with different and incompatible tools, and provide process enforcement and permissions
  • Work with versions - code stored in versions can be viewed and restored from version control at any time as needed. This makes it easy to base new work on any version of the code
  • Collaboration - helps to avoid, resolve and prevent conflicts, even when people make changes at the same time
  • Maintain a history of changes - history can be reviewed to find out who made changes, why, and when. History gives you the confidence to experiment, since you can roll back to a previous good version at any time
  • Automate tasks - save time and generate consistent results. Automate testing, code analysis and deployment when new versions are saved to version control

Some types of SCMs are:

  • Centralised - based on the idea that there is a single central copy of your project, and developers commit their changes to this central copy. TFVC is an example of a centralised system
  • Distributed (DVCS) - every developer clones a copy of the repository and has the full history of the project on their own hard drive. Git and Mercurial are examples of distributed source control systems

Azure Repos offers both Git and TFVC source control systems and includes in its offering:

  • Free unlimited private Git repositories
  • Support for any Git client
  • Web hooks and API integration
  • Semantic code search (code-aware search)
  • Built-in CI/CD to automatically trigger builds, tests and deployments with every completed pull request using Azure pipelines or your tools

GitHub is a Git repository hosting service, but it adds many of its own features, such as CI/CD using GitHub Actions, alerts about vulnerabilities in your code, automatic updates for vulnerable dependencies, semantic code analysis, protection against accidentally committed tokens, review tools, protected branches, Git LFS for large files, documentation with GitHub Pages, project management tools, team administration, codes of conduct and pre-written licenses for the repository.

Azure Boards has direct integration with Azure Repos, but it can also be integrated with GitHub.

Code Quality & Technical Debt

The quality of code cannot be measured subjectively; instead, there are five key traits to measure for higher quality:

  • Reliability - measures the probability that a system will run without failure over a specific period of operation
  • Maintainability - measures how easily software can be maintained
  • Testability - measures how well the software supports testing efforts
  • Portability - measures how usable the same software is in different environments
  • Reusability - measures whether existing assets can be used again

Complexity metrics

Complexity metrics can help in measuring quality. Cyclomatic complexity measures the number of linearly independent paths through a program's source code. Another way to understand quality is by calculating Halstead complexity measures; these include program vocabulary, program length, calculated program length, volume, difficulty and effort. Code analysis tools can be used to check for considerations such as security, performance, interoperability, language usage and globalization, and should be part of every developer's toolbox and software build process.

Quality metrics

  • Failed builds percentage
  • Failed deployments percentage
  • Ticket volume - the overall volume of customer bug tickets
  • Bug bounce percentage - the percentage of customer or bug tickets that are re-opened
  • Unplanned work percentage - the percentage of the overall work performed that is unplanned

Git for Enterprise DevOps

There are two philosophies on how to organize your repos: a Monorepo (all the source code in a single repository) or Multiple repos. The trade-off is between fast onboarding (everything in one place) and fine-grained developer permissions. The most common branch workflows are:

  • Trunk-based development - a branch per feature, with the main branch never containing broken code
  • GitFlow - defines a strict branching model designed around the project release
  • Forking - gives every developer their own server-side repository

Continuous delivery demands a significant level of automation. Git gives you the ability to automate most of the checks in your code base even before committing the code into your local repository, let alone the remote. Git hooks are a mechanism that allows custom scripts to be run before or after certain Git lifecycle events occur, such as committing, merging, and pushing. The only criterion is that hooks must be stored in the .git/hooks folder in the repo root, and they must be named to match the corresponding events:

  • applypatch-msg
  • pre-applypatch
  • post-applypatch
  • pre-commit
  • prepare-commit-msg
  • commit-msg
  • post-commit
  • pre-rebase
  • post-checkout
  • post-merge
  • pre-receive
  • update
  • post-receive
  • post-update
  • pre-auto-gc
  • post-rewrite
  • pre-push

Azure Pipelines

The core idea is to create a repeatable, reliable, and incrementally improving process for taking software from concept to customer. Azure Pipelines is a fully featured service that is mostly used to create cross platform CI/CD (Continuous Integration - Continuous Deployment). It works with all the Git providers and can deploy to most major cloud services.

Continuous integration?is used to automate tests and builds for your project and catch bugs or issues early in the development cycle when they're easier and faster to fix.

Artifacts are collections of code and binaries produced by CI systems.

Continuous delivery?or?Continuous Deployment?is used to automatically deploy and test code in multiple stages to help drive quality.

An?Agent?is installable software that runs a build and/or deployment job. If your pipelines are in Azure Pipelines, then you've got a convenient option to build and deploy using a Microsoft-hosted agent. Each time a pipeline is run, a fresh VM is provided.

Build?is one execution of a pipeline.

A?deployment target?is a VM, container or any service that's used to host the application being developed. A pipeline might deploy the app to one or more deployment targets.

A?job?represents an execution boundary of a set of steps. All the steps run together on the same agent. A build contains one or more jobs. Most jobs run on an agent.

A?pipeline?defines the CI/CD process for your app. It can be thought of as a script that defines how your test, build, and deployment steps are run.

Azure Pipelines offers a free tier for public projects: your pipeline must be part of an Azure Pipelines public project and must build a public repository, either from GitHub or from the same public project in your Azure DevOps organization.

Azure Pipelines has a visual designer, which is great for users who are new to CI/CD. Mirroring the rise of interest in infrastructure as code, there has been considerable interest in defining pipelines as code. A typical microservice architecture will require many deployment pipelines that are for the most part identical. When you use YAML, you define your pipeline in code alongside the rest of the code for your app. The benefits of using YAML are that the pipeline is versioned with your code and follows the same branching structure, and you get validation of your changes through code reviews in pull requests and branch build policies.

Continuous Integration

The idea is to minimize the cost of integration by making it an early consideration. The end goal of CI is to make integration a simple, repeatable process that is part of the everyday development workflow, in order to reduce integration costs and respond to defects early. Continuous integration relies on four key elements for successful implementation: a Version Control System, a Package Management System, a Continuous Integration System, and an Automated Build Process.

Azure Pipelines can automatically build and validate every pull request and commit to your Azure Repos Git repository. Azure Pipelines can be used with Azure DevOps public and private projects. Most pipelines will have a Name (the build number format; if you do not explicitly set a name format, you get an integer number), Trigger, Variables, Job (a set of steps that are executed by an agent in a queue), Pool, Checkout and Steps.
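
As a rough sketch, a minimal azure-pipelines.yml touching most of these sections might look like the following (the variable name and build number format are illustrative choices):

```yaml
# Illustrative minimal pipeline showing Name, Trigger, Variables, Pool and Steps
name: $(Date:yyyyMMdd)$(Rev:.r)    # build number format

trigger:
- main                             # run CI on every push to main

variables:
  buildConfiguration: 'Release'    # hypothetical variable

pool:
  vmImage: 'ubuntu-latest'         # Microsoft-hosted agent

steps:
- checkout: self                   # explicit checkout of this repository
- script: echo Building $(buildConfiguration)
  displayName: Build
```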

A stage in a pipeline is a collection of related jobs. The following example runs three stages, one after another; the middle stage runs two jobs in parallel.

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building
- stage: Test
  jobs:
  - job: TestOnIOS
    displayName: Test on iOS
    steps:
    - script: echo Testing iOS
  - job: TestOnAndroid
    displayName: Test on Android
    steps:
    - script: echo Testing Android
- stage: Deploy
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploying

You can export reusable sections of your pipeline as a template. Azure Pipelines supports a maximum of 50 unique template files in a single pipeline.
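
For example, a reusable group of steps can live in its own file and be consumed from the main pipeline; the file name build-steps.yml below is a hypothetical choice:

```yaml
# build-steps.yml - a hypothetical reusable steps template
parameters:
- name: configuration
  default: 'Release'

steps:
- script: echo Building in ${{ parameters.configuration }} mode
```

The main pipeline then references the template by its path:

```yaml
# azure-pipelines.yml - consuming the template
steps:
- template: build-steps.yml        # path relative to this file
  parameters:
    configuration: 'Debug'
```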

Security, Secrets and Application Configuration

Security cannot be a separate department, and it cannot be added at the end of a project. Security must be part of DevOps; together they are called DevSecOps. DevSecOps incorporates the security team and their capabilities into your DevOps practices, making security the responsibility of everyone on the team, and something that needs to be looked at holistically across the application life cycle.

Threat modeling is a core element of the Security Development Lifecycle (SDL). It's an engineering technique to identify threats, attacks, vulnerabilities, and countermeasures that could affect your application. You can use threat modeling to shape your application's design, meet your company's security objectives, and reduce risk.

There are five major threat modeling steps:

  1. Defining security requirements.
  2. Creating an application diagram.
  3. Identifying threats.
  4. Mitigating threats.
  5. Validating that threats have been mitigated.

The Microsoft Threat Modeling Tool makes threat modeling easier for all developers through a standard notation for visualizing system components, data flows, and security boundaries. The Threat Modeling Tool enables software architects to communicate about the security design of their systems, analyze those designs for potential security issues, suggest and manage mitigations for security issues.

Continuous security validation should be added at each step from development through production. Validation in the CI/CD begins before the developer commits code. Static code analysis tools in the IDE provide the first line of defense to help ensure that security vulnerabilities are not introduced into the CI/CD process. The process for committing code into a central repository should have controls to help prevent security vulnerabilities. Using Git source control in Azure DevOps with branch policies provides a gated commit experience that can provide this validation. The CI builds should run static code analysis tests to ensure that the code is following all rules for both maintenance and security.

One of the key reasons to move configuration away from source control is to delineate responsibilities, commonly known as separation of concerns:

  • Configuration custodian: Responsible for generating and maintaining the life cycle of configuration values.
  • Configuration consumer: Responsible for defining the schema (loose term) for the configuration that needs to be in place and then consuming the configuration values in the application or library code.
  • Configuration store: The underlying store that is leveraged to store the configuration.
  • Secret store: While you can store configuration and secrets together, it violates our separation of concern principle, so the recommendation is to leverage a separate store for persisting secrets.

External configuration store patterns store the configuration information in an external location and provide an interface that can be used to quickly and efficiently read and update configuration settings. In cloud computing, this is typically a cloud-based storage service, but it could be a hosted database or other system as well. This pattern is useful for configuration settings that are shared between multiple applications and application instances, as a complementary store for some of the settings for applications, or to simplify administration of multiple applications.

Azure Key Vault allows you to manage your organization's secrets and certificates in a centralized repository. The secrets and keys are further protected by Hardware Security Modules (HSMs). It can be used for secret, key and certificate management. When using Key Vault, application developers no longer need to store security information in their application. Access to a key vault requires proper authentication and authorization before a user or application can get access; authentication is done via Azure Active Directory.
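
As a sketch, an Azure Pipelines job can pull secrets out of Key Vault at run time with the AzureKeyVault task; the service connection and vault names below are placeholders:

```yaml
# Illustrative step: map Key Vault secrets to pipeline variables.
# 'my-service-connection' and 'my-vault' are placeholder names.
steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-service-connection'
    KeyVaultName: 'my-vault'
    SecretsFilter: '*'             # or a comma-separated list of secret names

# Fetched secrets become pipeline variables, masked in the logs
```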

GitHub Actions

GitHub Actions can be used for a wide variety of tasks, such as automated testing, automatically responding to new issues or mentions, triggering code reviews, handling pull requests, and branch management. Workflows are defined in YAML and reside within GitHub repositories.

GitHub tracks events that trigger the start of workflows. Workflows are the unit of automation and they contain Jobs. Jobs use Actions to get work done.

This workflow will build and push a node.js application to an Azure Web App when a release is created:

on:
  release:
    types: [created]
env:
  AZURE_WEBAPP_NAME: your-app-name
  AZURE_WEBAPP_PACKAGE_PATH: '.'
  NODE_VERSION: '12.x'
jobs:
  build-and-deploy:
    name: Build and Deploy
    runs-on: ubuntu-latest
    environment: production
    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ env.NODE_VERSION }}
      uses: actions/setup-node@v2
      with:
        node-version: ${{ env.NODE_VERSION }}
    - name: npm install, build, and test
      run: |
        # Build and test the project, then
        # deploy to Azure Web App.
        npm install
        npm run build --if-present
        npm run test --if-present
    - name: 'Deploy to Azure WebApp'
      uses: azure/webapps-deploy@v2
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
        package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}        

Workflows include several standard syntax elements.

name - optional but highly recommended. It appears in several places within the GitHub UI.

on - the event or list of events that will trigger the workflow.

jobs - the list of jobs to be executed.

runs-on - tells Actions which runner to use.

steps - the list of steps for the job.

uses - tells Actions which predefined action needs to be retrieved, e.g. an action that installs Node.js.

run - tells the job to execute a command on the runner, e.g. to execute an npm command.

GitHub provides several hosted runners, to avoid you needing to spin up your own infrastructure to run actions. For JavaScript code, you have implementations of Node.js on Windows, macOS and Linux. If you need to use other languages, a Docker container can be used; at present, the Docker container support is Linux based only. If you need different configurations to the ones provided, you can create a self-hosted runner. GitHub also provides a series of built-in environment variables.
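
A small illustrative job showing a hosted runner and a few of the built-in environment variables:

```yaml
jobs:
  show-context:
    runs-on: ubuntu-latest         # GitHub-hosted Linux runner
    steps:
    - run: |
        echo "Repository: $GITHUB_REPOSITORY"
        echo "Event:      $GITHUB_EVENT_NAME"
        echo "Commit:     $GITHUB_SHA"
```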

It's important to follow best practices when creating actions:

  • Create chainable actions. Don't create large monolithic actions, instead, create smaller functional actions that can be chained together.
  • Version your actions like other code.
  • Provide a latest label.
  • Add appropriate documentation.
  • Add detailed action.yml metadata. At the root of your action you will have an action.yml file; make sure it is populated with author, icon, and any expected inputs and outputs.
  • Consider contributing to the marketplace.

Secrets are similar to environment variables but encrypted. They can be created at the Repository or the Organization level. If secrets are created at the organization level, access policies can be used to limit the repositories that can use them.
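
A secret is referenced through the secrets context and is typically passed to a step as an environment variable; MY_API_KEY and deploy.sh below are hypothetical names:

```yaml
steps:
- name: Deploy with an API key
  run: ./deploy.sh                       # hypothetical deployment script
  env:
    API_KEY: ${{ secrets.MY_API_KEY }}   # decrypted at run time, masked in logs
```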

Dependency Management Strategy

It is essential that the software dependencies that are introduced in a project and solution can be properly declared and resolved. There are many aspects of a dependency management strategy.

  • Standardization - allows a repeatable, predictable and automated process and usage.
  • Package formats and sources - the dependency management strategy should include the selection of package formats and the corresponding sources where packages are stored and retrieved.
  • Versioning

The goal is to reduce the size of your own codebase and system. You achieve this by removing certain components of your solution. These are going to be centralized, reused, and maintained independently. You will remove those components and externalize them from your solution at the expense of introducing dependencies on other components.

To identify the dependencies in your codebase, you can scan your code for patterns and reuse, and analyze how the solution is composed of individual modules and components.

  • When certain pieces of code appear in several places it is a good indication that this code can be reused.
  • Another approach is to find code that might define components in your solution. You will look for code elements that have a high cohesion to each other, and low coupling with other parts of code.
  • Related to the high cohesion, you can look for parts of the code that have a similar lifecycle and can be deployed and released individually.
  • Some parts of your codebase might have a slow rate of change. That code is stable and is not altered often. You can check your code repository to find the code with a low change frequency.
  • Whenever code and components are independent and unrelated to other parts of the system, it can potentially be isolated to a separate component and dependency.

At this point I will assume that you are already familiar with package management: how to package dependencies and the various packaging formats, feeds, sources, and package managers. Most package management systems provide feeds where you can request packages to install in your applications. In Azure Artifacts, you can have multiple feeds in your projects, which are always private. It is recommended that you create one feed per package type; this way it is clear what each feed contains. Each feed can contain one or more upstream sources and can manage its own security. Azure Artifacts has four different roles for protecting package feeds:

  • Reader - can list and restore (or install) packages from the feed
  • Collaborator - can save packages from upstream sources
  • Contributor - can push and unlist packages in the feed
  • Owner - has all available permissions for a package feed

Continuous Delivery

The need to deliver fast, high-quality and inexpensive software guided us to Continuous Delivery. Value should flow through our pipelines continuously, not be piled up and released all at once. To explain CD a bit more, these are the eight principles of continuous delivery:

  1. The process for releasing/deploying software must be repeatable and reliable
  2. Automate everything!
  3. If something is difficult or painful, do it more often
  4. Keep everything in source control
  5. Done means “released”
  6. Build quality in!
  7. Everybody has responsibility for the release process
  8. Improve continuously

The best way to move your software to production safely while maintaining stability is by separating your functional release from your technical release (deployment).

In order to deploy multiple times a day, everything needs to be automated, and as such tests need to run every time a new release is created. Instead of automating all your manual tests into automated UI tests, you need to rethink your testing strategy. Tests can be divided into four categories.

  • Business facing - these tests are more functional and are most of the time executed by end users of the system or by specialized testers who know the problem domain. Examples: functional tests, story tests, prototypes, and simulations.
  • Supporting the team - these tests help a development team to get constant feedback on the product so they can find bugs fast and deliver a product with quality built in. Examples: exploratory tests, usability tests, acceptance tests.
  • Technology facing - these tests are rather technical and non-meaningful to business people. They are typically written and executed by the developers in a development team. Examples: unit tests, component tests, and system or integration tests.
  • Critique product - tests that validate the workings of a product against its functional and non-functional requirements. Examples: performance tests, load tests, security tests, and any other non-functional requirements tests.

Big monolithic applications are more difficult to deliver. Every part that is changed might impact other parts that did not change. Breaking up your software into smaller, independent pieces, is in many cases a good solution. One approach to solving these issues is to implement microservices.

Microservices architecture

A microservice is an autonomous, independently deployable, and scalable software component. Microservices are small, focused on doing one thing very well, and they can run autonomously. You need to keep track of the interfaces and how they interact with each other, and you need to maintain multiple application lifecycles and Continuous Delivery pipelines.

The traditional or classical deployment pattern was moving your software through a development stage, a testing stage, maybe an acceptance or staging stage, and finally a production stage. But end users always use your application differently than expected: unexpected events will happen in a datacenter, and multiple events from multiple users will occur at the same time, triggering code that has not been tested in that way. To overcome this, we need to embrace the fact that some features can only be tested in production. Some modern deployment patterns that manage testing in production are:

  • Blue-green deployments - a technique that reduces risk and downtime by running two identical environments. Once you have deployed and thoroughly tested the software in green, you switch the router or load balancer, so all incoming requests now go to green instead of blue. Green becomes live, and blue becomes idle. If something unexpected happens with green, you can immediately roll back to blue.
  • Canary releases - a way to identify potential problems as soon as possible without exposing all your end users to the issue at once. Canary releases can be implemented using a combination of feature toggles, traffic routing, and deployment slots.
  • Dark launching - launching a new feature and using it on the backend to gather metrics. You run all data and calculations through the new feature, but it is not exposed yet.
  • A/B testing - a method of comparing two versions of a webpage or app against each other to determine which one performs better.
  • Progressive exposure or ring-based deployment
  • Feature toggles - implemented using feature or release flags. You have a group of users who are better at dealing with new code and issues if they arise; these people are often called canaries. In its purest form, a feature toggle is an IF statement, and you forward specific flags to specific deployments.
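
In an Azure Pipelines context, the same IF-statement idea can be sketched as a condition on a variable; the variable name deployNewFeature is illustrative:

```yaml
variables:
  deployNewFeature: 'false'        # flip to 'true' to expose the feature

steps:
- script: echo Deploying the new feature
  displayName: Gated feature step
  condition: eq(variables['deployNewFeature'], 'true')
```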

Infrastructure as Code

IaC is the concept of managing your operations environment in the same way you do applications or other code. Rather than manually making configuration changes or using one-off scripts to make infrastructure adjustments, the operations infrastructure is managed instead using the same rules and strictures that govern code development. Benefits of infrastructure as code are:

  • Improves traceability
  • Provides consistent environments from release to release
  • Consistency across development, test, and production environments
  • Automates scale-up and scale-out
  • Allows configurations to be version controlled
  • Provides code review and unit-testing capabilities to help manage infrastructure changes
  • Uses immutable service processes
  • Allows blue/green deployments
  • Treats infrastructure as a flexible resource that can be provisioned, de-provisioned, and re-provisioned according to the needs

There are several approaches that you can adopt to implement IaC and CaC. The two main approaches are the declarative (state what the final state should be) and the imperative (the script states how to reach the final state of the machine by executing each step in order).

Idempotence is a mathematical term that can be used in the context of Infrastructure as Code and Configuration as Code. It is the ability to apply one or more operations against a resource, resulting in the same outcome. If you apply a deployment to a set of resources 100 times, you should end up with the same result after each application of the script or template.

Using Resource Manager templates will make your deployments faster and more repeatable.

  • Templates improve consistency - by providing a common language for you and others to describe your deployments
  • Templates help express complex deployments - they enable you to deploy multiple resources in the correct order
  • Templates reduce manual, error-prone tasks
  • Templates are code - as a type of IaC, they can be shared, tested, and versioned like any other piece of software
  • Templates promote reuse
  • Templates are linkable - you can write small templates that each define a piece of a solution, and then combine them to create a complete system

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": {},
  "variables": {},
  "functions": [],
  "resources": [],
  "outputs": {}
}        

  • Parameters - this section is where you specify which values are configurable when the template runs. For example, you might allow template users to specify a username, password, or domain name.
  • Variables - this section is where you define values that are used throughout the template.
  • Functions - this section is where you define procedures that you don't want to repeat throughout the template.
  • Resources - this section is where you define the Azure resources that make up your deployment.
  • Outputs - this section is where you define any information you'd like to receive when the template runs.
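
Putting the sections together, a minimal template that deploys a single storage account might look like this (the parameter name and resource details are illustrative):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageName": {
      "type": "string",
      "metadata": { "description": "Globally unique storage account name" }
    }
  },
  "variables": {
    "location": "[resourceGroup().location]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "[parameters('storageName')]",
      "location": "[variables('location')]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ],
  "outputs": {
    "storageId": {
      "type": "string",
      "value": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageName'))]"
    }
  }
}
```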

Modularize templates

When using Azure Resource Manager templates, a best practice is to modularize them by breaking them into individual components. The primary way to do this is with linked templates.

"resources": [
  {
      "apiVersion": "2021-05-25",
      "name": "linkTemplate",
      "type": "Microsoft.Resources/deployments",
      "properties": {
          "mode": "Incremental",
          Link_To_External_Template
      }
  }
]        

You can also nest a template within the main template:

"resources": [
  {
    "apiVersion": "2021-05-25",
    "name": "NestedTemplate",
    "type": "Microsoft.Resources/deployments",
    "properties": {
      "mode": "Incremental",
      "template": {
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
        "contentVersion": "1.0",
        "resources": [
          {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[variables('storageName')]",
            "apiVersion": "2021-05-25",
            "location": "Central EU",
            "properties": {
              "accountType": "Standard_LRS"
            }
          }
        ]
      }
    }
  }
]        

Deployment modes

There are three options for deployments:

  • Validate - compiles the template, validates the deployment, and ensures that the template is functional and the syntax is correct
  • Incremental mode - deploys only what is defined in the template, leaving existing resources that aren't in the template unchanged
  • Complete mode - Resource Manager deletes resources that exist in the resource group but aren't specified in the template
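A toy model can make the difference between the two deployment modes concrete. This is a rough Python sketch, not how Resource Manager is actually implemented, and the resource names are made up:

```python
def deploy(existing, template, mode):
    """Return the resource group contents after a deployment.

    Incremental: resources in the template are created or updated;
    anything else already in the group is left untouched.
    Complete: resources not in the template are deleted.
    """
    result = dict(existing) if mode == "Incremental" else {}
    result.update(template)
    return result

group = {"vm1": "Standard_B2s", "storage1": "Standard_LRS"}
template = {"vm1": "Standard_D2s_v3"}  # only the VM is declared

print(deploy(group, template, "Incremental"))
# {'vm1': 'Standard_D2s_v3', 'storage1': 'Standard_LRS'} - storage1 survives
print(deploy(group, template, "Complete"))
# {'vm1': 'Standard_D2s_v3'} - storage1 is deleted
```

The sketch shows why complete mode needs care: anything you forget to declare in the template is removed from the resource group.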

Azure Automation

It is an Azure service that provides a way for users to automate the manual, long-running, error-prone, and frequently repeated tasks that are commonly performed in a cloud and enterprise environment. Azure Automation saves time and increases the reliability of regular administrative tasks. You can even schedule the tasks to be performed automatically at regular intervals. You can automate processes using runbooks or automate configuration management by using Desired State Configuration (DSC).

Some Azure Automation capabilities are:

  • Process automation
  • Automation State Configuration
  • Update management
  • Start and stop virtual machines
  • Integration with GitHub, Azure DevOps, Git, or TFVC repositories
  • Automate AWS Resources
  • Manage shared resources
  • Run backups

Desired State Configuration is a configuration management approach that you can use for configuration, deployment, and management of systems to ensure that an environment is maintained in a state that you specify and doesn't deviate from that state.

DSC consists of three primary components:

  • Configurations - declarative PowerShell scripts that define and configure instances of resources. When you run a configuration, DSC applies it, ensuring that the system exists in the state the configuration lays out. DSC configurations are also idempotent: the Local Configuration Manager continues to ensure that machines are configured in whatever state the configuration declares.
  • Resources - they contain the code that puts and keeps the target of a configuration in the specified state.
  • Local Configuration Manager - the LCM runs on the nodes or machines you wish to configure. It is the engine by which DSC facilitates the interaction between resources and configurations.

There are two methods of implementing DSC:

  • Push mode - a user actively applies a configuration to a target node by pushing the configuration out to it.
  • Pull mode - pull clients are configured to get their desired state configurations from a remote pull service automatically. This remote pull service is provided by a pull server, which acts as central control and management for the configurations, ensures that nodes conform to the desired state, and reports back on their compliance status. The pull server can be set up as an SMB-based or an HTTPS-based server. HTTPS-based pull servers use the Open Data Protocol (OData) with the OData web service to communicate using REST APIs.

3rd Party IaC Tools in Azure

Configuration management tools make changes and deployments faster, repeatable, scalable, and predictable, and keep systems in the desired state. Advantages of using them include adherence to coding conventions, idempotency (the end state remains the same no matter how many times the code is executed), and a distribution design that improves the management of large numbers of remote servers.

Chef

Chef Infra helps you manage your infrastructure in the cloud, on-premises, or in a hybrid environment by using instructions (or recipes) to configure nodes. A node, or chef-client, is any physical or virtual machine (VM), cloud, or network device that is under management by Chef Infra.

Chef Infra has three main architectural components:

  • Chef Server - This is the management point
  • Chef Client - This is a Chef agent that resides on the servers you are managing
  • Chef Workstation - This is the admin workstation where you create policies and execute management commands

Chef Infra also uses concepts called cookbooks and recipes, which are essentially the policies that you define and apply to your servers. You can deploy Chef on Microsoft Azure from the Azure Marketplace using the Chef Automate image.

Puppet

Puppet is a deployment and configuration management toolset that provides the enterprise tools you need to automate the entire lifecycle of your Azure infrastructure. It provides a series of open-source configuration management tools and projects, and a configuration management platform that allows you to maintain state in both your infrastructure and application deployments.

Puppet consists of the following components:

  • Puppet Master - responsible for compiling code to create Puppet Agent catalogs
  • Puppet Agents - the machines managed by the Puppet Master
  • Console Services - the web-based user interface for managing your systems
  • Facts - metadata related to state

Ansible

Ansible is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. Using Ansible, you can provision your entire cloud infrastructure. In addition to provisioning and configuring applications and their environments, Ansible enables you to automate the deployment and configuration of resources such as virtual networks, storage, subnets, and resource groups. With Ansible you don't have to install software on the managed machines.

Ansible models your IT infrastructure by describing how all your systems interrelate, rather than just managing one system at a time. The core components of Ansible are:

  • Control Machine - any machine with Ansible installed on it; it also requires Python 2.7 or Python 3.5 and higher
  • Managed Nodes - the machines and environments being managed; managed nodes are sometimes referred to as hosts
  • Playbooks - ordered lists of tasks that have been saved so you can run them repeatedly in the same order. Playbooks are Ansible's language for configuration, deployment, and orchestration
  • Modules - Ansible works by connecting to your nodes and pushing modules to them. Modules are the units of code that define the configuration; they are reusable across playbooks, represent the desired state of the system, are executed over SSH by default, and are removed when finished
  • Inventory - a list of managed nodes
  • Roles - predefined file structures that allow automatic loading of certain variables, files, tasks, and handlers
  • Facts - data points about the remote system that Ansible is managing
  • Plug-ins - code that supplements Ansible's core functionality

Terraform

HashiCorp Terraform is an open-source tool that allows you to provision, manage, and version cloud infrastructure. It codifies infrastructure in configuration files that describe the topology of cloud resources such as VMs, storage accounts, and networking interfaces. Terraform's CLI provides a simple mechanism to deploy and version the configuration files to Azure or any other supported cloud service. The CLI also allows you to validate and preview infrastructure changes before you deploy them.

Some of Terraform’s core components include:

  • Configuration files - text-based files in .tf or .tf.json format that define infrastructure and application configuration
  • Terraform CLI - the command-line interface from which you run configurations
  • Modules - self-contained packages of Terraform configurations that are managed as a group. You use modules to create reusable components and for basic code organization; a list of available modules for Azure is published on the Terraform Registry Modules webpage
  • Provider - responsible for understanding API interactions and exposing resources
  • Overrides - configuration files that are loaded last and merged into your configuration
  • Resources - sections of a configuration file that define components of your infrastructure, such as VMs, network resources, containers, dependencies, or DNS records
  • Execution plan - shows what Terraform will do when a configuration is applied
  • Resource graph - a dependency graph of all resources, which Terraform uses to determine the order of operations
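The resource graph is what lets Terraform order operations correctly: dependencies are created before the things that need them. The sketch below uses Python's standard-library graphlib to topologically sort a hypothetical set of Azure resources; the addresses mimic Terraform's naming, but the graph itself is invented:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# each resource maps to the set of resources it depends on
graph = {
    "azurerm_resource_group.rg": set(),
    "azurerm_virtual_network.vnet": {"azurerm_resource_group.rg"},
    "azurerm_subnet.subnet": {"azurerm_virtual_network.vnet"},
    "azurerm_network_interface.nic": {"azurerm_subnet.subnet"},
}

order = list(TopologicalSorter(graph).static_order())
print(order)
# the resource group always precedes the vnet, the vnet the subnet, and so on
```

Independent branches of such a graph can also be walked in parallel, which is how a tool like Terraform speeds up large deployments.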

Containers and Docker

Virtual machines provide hardware virtualization, while containers provide operating-system-level virtualization by abstracting the user space rather than the entire operating system. The operating system kernel is shared across containers, which is what makes them so lightweight. Containers are portable and give you a consistent development environment. A container is a runnable instance of a packaged application, while Docker is the container runtime and the tooling used to build and run those packages.

Containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another. A container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.

Containers become very compelling when it comes to microservices. Microservices is an approach to application development in which every part of the application is deployed as a fully self-contained component that can be individually scaled and updated. In production you might scale out to different numbers of instances across a cluster of servers, depending on resource demands, as customer request levels rise and fall. The namespace and resource isolation of containers prevents one microservice instance from interfering with others, and use of the Docker packaging format and APIs unlocks the Docker ecosystem for the microservice developer and application operator. With a good microservice architecture, you can solve the management, deployment, orchestration, and patching needs of a container-based service with reduced risk of availability loss while maintaining high agility.

Azure provides a wide range of services that help you to work with containers:

  • Azure Container Instances (ACI) - provisions and manages the infrastructure that runs your containers, with the security of hypervisor isolation for each container group. This ensures that your containers aren't sharing an operating system kernel with other containers
  • Azure Kubernetes Service (AKS) - a highly available, secure, and fully managed Kubernetes service
  • Azure Container Registry (ACR) - lets you store and manage container images in a central registry
  • Azure Service Fabric - allows you to build and operate always-on, scalable, distributed apps. It simplifies the development of microservice-based applications and their life-cycle management, including rolling updates with rollback, partitioning, and placement constraints
  • Azure App Service - provides a managed service for both Windows and Linux based web applications, and the ability to deploy and run containerized applications on both platforms. It offers auto-scaling and load balancing and integrates easily with Azure DevOps.

Azure Kubernetes Service (AKS)

Kubernetes is an open-source cluster orchestration technology maintained by the Cloud Native Computing Foundation. AKS makes it quicker and easier to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand without taking applications offline. It manages health monitoring and maintenance, Kubernetes version upgrades, and patching.

Implementing Software Feedback

Deploying code into production and running a health check is not enough. We need to look beyond that point and continue to monitor how the software runs. Getting feedback about what happens after the software is deployed is essential to staying competitive and making the system better. The right feedback loop must be fast, relevant, accessible, and actionable. Engineering teams need to set action rules and own code quality end to end. Feedback is fundamental not only to DevOps practice but throughout the SDLC process.

Continuous Monitoring

Continuous monitoring builds on the concepts of CI/CD and it refers to the process and technology required to incorporate monitoring across each phase of your DevOps and IT operations lifecycles. It helps to continuously ensure the health, performance, and reliability of your application and infrastructure as it moves from development to production.

  • Azure Monitor - the unified monitoring solution in Azure that provides full-stack observability across applications and infrastructure in the cloud and on-premises.
  • Azure Log Analytics - a tool in the Azure portal for editing and running log queries against data collected by Azure Monitor Logs and interactively analyzing the results.
  • Kusto Query Language (KQL) - the primary way to query Log Analytics. It provides both a query language and a set of control commands.
  • Application Insights - a small package that you install in your application, paired with an Application Insights resource in the Azure portal. It monitors your app and sends telemetry data to the portal; the application can run anywhere and doesn't have to be hosted in Azure. Application Insights is aimed at the development team, to help you understand how your app is performing and how it's being used. It monitors request rates, response times, failure rates, dependency rates, exceptions, page views, load performance, AJAX calls, user and session counts, server and Docker diagnostics, and custom metrics that you write yourself.
  • App Center Diagnostics - a cloud service that helps developers monitor the health of an application, delivering the data needed to understand what happens when an app fails.
  • Azure Dashboards - visualizations such as charts and graphs that help you analyze your monitoring data, drill down on issues, and identify patterns.
  • IT Service Management Connector - provides bi-directional integration between Azure monitoring tools and ITSM tools such as ServiceNow, Provance, Cherwell, and System Center Service Manager.

Feedback Mechanisms

Engaging customers throughout your product lifecycle is a primary Agile principle. Each team needs to interact directly with customers on the feature sets they own.

  • Continuous feedback - build customer feedback loops into the product. These can take many forms, such as customer voice (make it easy for customers to give feedback, add ideas, and vote on next-generation features), in-product feedback, and customer demos.
  • Early adopter programs - groups that gain access to early versions of working software, on which they can then provide feedback.
  • Data-driven decisions - instrument your product to obtain useful data that can test various scenarios.

Site Reliability Engineering (SRE)

It empowers software developers to own the ongoing daily operation of their applications in production. The goal is to bridge the gap between the development team, which wants to ship things as fast as possible, and the operations team, which doesn't want anything to blow up in production. A key skill of a site reliability engineer is a deep understanding of the application and its code: how it runs, how it's configured, how it scales, and how it's monitored.

Some of the typical responsibilities of a site reliability engineer are:

  • Proactively monitor and review application performance
  • Handle on-call and emergency support
  • Ensure software has good logging and diagnostics
  • Create and maintain operational runbooks
  • Help triage escalated support tickets
  • Work on feature requests, defects, and other development tasks
  • Contribute to overall product roadmap
  • Perform live site reviews and capture feedback for system outages

Both SRE and DevOps are methodologies that address an organization's need for production operation management. Where DevOps raises problems and dispatches them to the development team to solve, the SRE approach is to find problems and solve at least some of them itself. DevOps practices can help ensure IT racks, stacks, configures, and deploys the servers and applications; site reliability engineers can then handle the daily operation of those applications.

DevSecOps

If you want to take full advantage of the agility and responsiveness of a DevOps approach, IT security must also play an integrated role in the full life cycle of your apps. DevSecOps means thinking about application and infrastructure security from the start. It also means automating some security gates to keep the DevOps workflow from slowing down. Two features of DevSecOps pipelines that are not found in standard DevOps pipelines are:

  • Package management and the approval process associated with it.
  • Source Scanner as an additional step for scanning the source code.

Azure Security Center

It is a monitoring service that provides threat protection across all your services. Security Center can:

  • Provide security recommendations based on your configurations, resources, and networks.
  • Monitor security settings across on-premises and cloud workloads, and automatically apply required security to new services as they come online.
  • Continuously monitor all your services and perform automatic security assessments to identify potential vulnerabilities before they can be exploited.
  • Use Azure Machine Learning to detect and block malicious software from being installed on your services.
  • Analyze and identify potential inbound attacks and help to investigate threats and any post-breach activity that might have occurred.
  • Provide just-in-time access control for ports, thereby reducing your attack surface by ensuring the network only allows traffic that you require.

Open-Source Software

The concerns with using open-source components are that the source code can be of low quality, lack active maintenance, contain malicious code, have security vulnerabilities, or carry unfavorable licensing restrictions. The starting point for secure development is to use secure coding practices. OWASP regularly publishes a set of Secure Coding Practices. Its guidelines currently cover advice in the following areas:

  • Input Validation
  • Output Encoding
  • Authentication and Password Management
  • Session Management
  • Access Control
  • Cryptographic Practices
  • Error Handling and Logging
  • Data Protection
  • Communication Security
  • System Configuration
  • Database Security
  • File Management
  • Memory Management
  • General Coding Practices

As dependency on these third-party open-source components increases, so does the risk of security vulnerabilities, hidden license requirements, and the compliance issues that follow from them. Identifying such issues early in the release cycle gives you advance warning and enough time to fix them. There are many tools that can scan for these vulnerabilities within the build and release pipelines, such as:

  • OWASP ZAP penetration testing
  • SonarQube
  • CodeQL in GitHub
  • GitHub Dependabot alerts and security updates

Conclusion

Software and the Internet have transformed the world and its industries, from shopping to entertainment to banking. Software no longer supports a business, rather it becomes an integral component of every part of a business. Companies interact with their customers through software delivered as online services or applications and on all sorts of devices. They also use software to increase operational efficiencies by transforming every part of the value chain.

In the same way that companies transformed how they design, build, and deliver products using industrial automation, companies in today’s world must transform how they build and deliver software.

I hope you've enjoyed reading this article as much as I've enjoyed writing it. Feel free to share it.
