S01E02 dapr-compass - how we organized the work
If you remember our plan to build an Order Management system with dapr (see S01E01), this was a nice, real-life challenge, and we decided to split the work among us according to experience, available time, and how quickly each of us could deliver results.
For our choice of tools, we decided to focus more on the dapr implementation than on the underlying infrastructure, which is rather the point of dapr. We used Azure DevOps as the main tracking and CI/CD tool, Terraform as the infrastructure desired-state configuration tool, and Azure as the cloud provider; but according to the specs, anything will do. The solution should work regardless of the underlying choice of cloud resources, as long as we have:
- Infrastructure as Code – scripted with HashiCorp Terraform
- A cache for the state store – the Azure Cache for Redis PaaS service
- A Kubernetes cluster to run the solution – we used Azure Kubernetes Service
- A key vault to store secrets – with Azure Key Vault
- An enterprise messaging service to exchange states – our beloved Azure Service Bus
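The four building blocks above can be sketched in Terraform roughly as follows. This is a hypothetical, minimal sketch: the resource names, location, and SKUs are illustrative, not the project's actual code, and the Key Vault and Service Bus resources are elided for brevity.

```hcl
# Illustrative resource group holding the whole environment
resource "azurerm_resource_group" "rg" {
  name     = "rg-dapr-compass"
  location = "westeurope"
}

# Redis cache backing the dapr state store
resource "azurerm_redis_cache" "state_store" {
  name                = "redis-dapr-compass"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  capacity            = 1
  family              = "C"
  sku_name            = "Basic"
}

# AKS cluster running the solution and the dapr runtime
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-dapr-compass"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  dns_prefix          = "daprcompass"

  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}

# Key Vault for secrets and Service Bus for messaging would follow
# the same pattern (azurerm_key_vault, azurerm_servicebus_namespace).
```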
Davide was the one who started this whole project. He is one of the Senior Cloud Solution Architects in the team and has extensive knowledge and development experience, mostly in the .NET space, cloud-native, and distributed applications. He had already completed portions of the work, so he “volunteered” to take on the heavy bulk: the Order Confirmation, Stock Actor, Reservation Cancellation, and Order Actor pieces.
Giulio is a recent addition to the Cloud Solution Architects team and brings massive expertise in Microsoft stack architectures, having been involved in the software development life cycle and DevOps at large enterprise customers. He started working on the Service Façade (aka daprAPI), and once the MVP is ready, he will prepare the load testing with Locust.
Paola is the only non-developer Cloud Solution Architect in the team; she offered to help with organization, documentation, social media, and copywriting (in her own words: someone has to do the job…).
I have been an evangelist and a software engineer, and am now a Cloud Solution Architect with Azure DevOps and Kubernetes expertise, so I immediately jumped on the underlying infrastructure and the Terraform deployment scripts, as well as the Azure DevOps pipelines for IaC and CI/CD. At some point, though, I will start writing code with dapr as well.
I worked on two Azure DevOps pipelines, keeping things simpler than I normally would in a production environment, where, for separation of duties, each environment and each piece of code would rightfully be entitled to its own pipeline. An Azure Repos repository hosts the Terraform code for the Azure resources, the YAML pipeline files, the API, and the dapr code. A variable group in the Azure DevOps Library holds all the environment information, so that the configuration can be parametrized for load testing once we are ready.
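As a sketch of how a pipeline consumes such a variable group: the group is linked by name at the top of the YAML file, and its variables are then available with the usual `$(...)` macro syntax. The group and variable names below are hypothetical, not the project's actual ones.

```yaml
# Link the shared variable group defined in Pipelines > Library
variables:
  - group: dapr-compass-environment

steps:
  # Any variable in the group can be referenced with $(name)
  - script: |
      echo "Deploying to resource group $(resourceGroupName) in $(azureRegion)"
    displayName: Show environment configuration
```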
Infrastructure as Code (IaC) Pipeline.
This pipeline builds the infrastructure with Terraform. It initializes against prerequisites and variables, checks out the repository, and publishes the build artifacts; in the deploy stage, it retrieves credentials from the service connections, runs Terraform init, plan, and apply to reach the desired state configuration, and installs the dapr and monitoring components with Helm, making the environment ready for the application that will be deployed by the following CI/CD pipeline. As a result of this pipeline, an AKS (Azure Kubernetes Service) cluster is created with the full dapr runtime and all the libraries needed to run the solution.
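The shape of that pipeline could look roughly like this. This is a hedged outline, not the project's actual YAML: the trigger path, artifact name, and backend configuration are illustrative, and service-connection authentication is omitted.

```yaml
# Rebuild the infrastructure when the Terraform code changes
trigger:
  paths:
    include:
      - terraform/*

stages:
  - stage: Build
    jobs:
      - job: Publish
        steps:
          - checkout: self
          # Publish the Terraform scripts as a build artifact
          - publish: terraform
            artifact: iac

  - stage: Deploy
    jobs:
      - job: Terraform
        steps:
          - download: current
            artifact: iac
          # Reach the desired state configuration
          - script: |
              terraform init
              terraform plan -out=tfplan
              terraform apply -auto-approve tfplan
            workingDirectory: $(Pipeline.Workspace)/iac
            displayName: Terraform init/plan/apply
          # (followed by Helm steps installing the dapr and monitoring components)
```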
When setting up Kubernetes, the Helm v3 charts install the following pods, preparing the cluster to discover dapr-enabled deployments and execute the dapr runtime (see also the documentation):
- dapr-operator - manages components and k8s services endpoints for Dapr (state stores, pub-subs, etc.)
- dapr-sidecar-injector - injects Dapr into annotated pods
- dapr-placement - used for actors only; creates mapping tables that map actor instances to pods
- dapr-sentry - manages mTLS and acts as a certificate authority.
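Installing that control plane comes down to a couple of Helm commands, which the pipeline runs against the freshly created cluster. A minimal sketch, assuming the official dapr Helm chart repository and the conventional `dapr-system` namespace:

```shell
# Register the official dapr chart repository
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update

# Install (or upgrade) the dapr control plane into its own namespace
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --create-namespace \
  --wait

# The four control-plane pods listed above should now be running:
kubectl get pods --namespace dapr-system
```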
CI/CD Pipeline.
This is the pipeline that builds and releases the code in the cluster. It starts from the Docker builds of the façade API, OrderActor, and StockActor images, then deploys them with Helm to the AKS cluster created by Terraform. The pipeline is configured so that a change in the application triggers a build only for that piece of code, leaving the other portions intact (this is usually the faster and safer option). I chose Helm here, in place of simpler YAML manifest files, to package each application together with all the components (Kubernetes pod/deployment, service, ingress) it needs to work in the cluster, and to make it easier to parametrize values inside the pipeline. Each deployment has some dapr annotations to enable the injection of the sidecar, create the actor map, expose the dapr port, and enable dapr distributed tracing.
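Those annotations sit on the pod template of each deployment, where the dapr-sidecar-injector picks them up. A hypothetical excerpt for one of the actor services (the app-id, port, image, and tracing configuration name are illustrative, and the exact annotation names depend on the dapr version in use):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orderactor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orderactor
  template:
    metadata:
      labels:
        app: orderactor
      annotations:
        dapr.io/enabled: "true"       # inject the dapr sidecar into this pod
        dapr.io/app-id: "orderactor"  # registers the app (and its actors) with dapr
        dapr.io/app-port: "80"        # port the application listens on
        dapr.io/config: "tracing"     # dapr Configuration enabling distributed tracing
    spec:
      containers:
        - name: orderactor
          image: myregistry.azurecr.io/orderactor:latest
          ports:
            - containerPort: 80
```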
To authorize the deployment operations on Azure (resource creation, image pushes and pulls in the container registry), I used two Service Principals: one with higher privileges on the subscription for resource creation, and another with only the permissions to push images to ACR and access the cluster to launch deployments.
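The least-privileged principal can be created with the Azure CLI roughly as follows. This is a sketch under assumed names (`myregistry`, `aks-dapr-compass`, `rg-dapr-compass`, `sp-dapr-cicd` are all hypothetical); the built-in `AcrPush` and AKS cluster-user roles are the real Azure RBAC roles that scope it down.

```shell
# Look up the resource IDs to scope the role assignments
ACR_ID=$(az acr show --name myregistry --query id --output tsv)
AKS_ID=$(az aks show --name aks-dapr-compass \
  --resource-group rg-dapr-compass --query id --output tsv)

# Service Principal that may only push images to this registry...
az ad sp create-for-rbac --name sp-dapr-cicd --role AcrPush --scopes "$ACR_ID"

# ...and may only access this cluster to launch deployments
az role assignment create --assignee <sp-app-id> \
  --role "Azure Kubernetes Service Cluster User Role" \
  --scope "$AKS_ID"
```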
This is what our Azure Board looks like today:
Want to see how this works out? Follow this series of articles for the next challenge with dapr.