How to Create a Kubernetes-based Architecture in Azure using Azure DevOps and Terraform - Part III

In the second part of this article series, we created the Azure Kubernetes Service, Application Gateway, and Container Registry. In this final article of the series, a .NET 6 Rest API will be deployed to our Kubernetes cluster using a Helm chart stored in an ACR repository.

[Image: Architecture]

Build and Deploy the Helm Chart to Container Registry

To avoid reinventing the wheel every time a new pod needs to be deployed to the Kubernetes cluster, we use Helm to manage our chart packages. The source code for the Helm Chart used in this tutorial is available at this link.

The output for the Helm Chart Build Pipeline can be viewed at this link. Next, we will walk through the Build Pipeline for the Helm Chart.

  • Helm Install

The Helm install task is the same one used in the AKS release pipeline.

  • Helm: Package

This task creates a .tgz file containing the Helm Package configuration files; a rough local-CLI equivalent is sketched after the settings below.

[Image: Helm package task]

Settings:

Command: Package
Chart Path: src/chart
Destination: $(Build.ArtifactStagingDirectory)
Enable the Save checkbox
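
For reference, the same packaging step can be reproduced outside the pipeline with the Helm CLI; this is only a rough local sketch of what the task does, not part of the pipeline itself:

# Rough local equivalent of the Helm package task (sketch only);
# ./artifacts stands in for $(Build.ArtifactStagingDirectory)
mkdir -p ./artifacts
helm package src/chart --destination ./artifacts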

  • Azure CLI: Push Chart

After creating and saving the Helm chart package in the local Helm repository, it's time to push it to our Azure Container Registry resource:

[Image: Azure CLI task responsible for pushing the created Helm Chart Package into the ACR repository]

Settings:

Azure subscription: point to your Azure Subscription
Script Location: Inline Script
Inline Script:
helm registry login riceastusallacrk8s.azurecr.io --username riceastusallacrk8s --password $(Release.Acr.Code)
helm push aspnetcore-1.0.0.tgz oci://riceastusallacrk8s.azurecr.io         
$(Release.Acr.Code) is the ACR admin password:
[Image: ACR admin password]
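
If you prefer not to copy the admin password from the portal, it can also be retrieved with the Azure CLI; this is just an optional sketch using the registry name from the script above:

# Optional: read the ACR admin password from the CLI instead of the portal
az acr credential show --name riceastusallacrk8s --query "passwords[0].value" -o tsv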

Build .NET 6 Rest API and deploy to Container Registry

The repo for the .NET 6 API is not shared because it's just a simple API created in Visual Studio with Docker support enabled.

The output for the .NET 6 Rest API Build Pipeline can be viewed at this link. Next, we will walk through the Build Pipeline for the API.

  • Docker: Build and Push

We are using the pre-configured Azure Pipelines Docker task to build and push the Docker image for the Rest API; a rough local-CLI equivalent is sketched after the settings below.

[Image: Docker build and push container image task]

Settings:

Container registry: point to your Azure Container Registry Service Connection
Container repository: sampleapi
Command: buildAndPush
Dockerfile: Api/Api/Dockerfile
Build context: Api
Tags: $(Build.BuildNumber)
Enable Add Pipeline metadata to image(s)
Enable Add base image metadata to image(s)
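
For reference, the buildAndPush command is roughly equivalent to running the following locally; this is only a sketch, with BUILD_NUMBER standing in for the $(Build.BuildNumber) pipeline variable:

# Rough local equivalent of the buildAndPush step (sketch only)
BUILD_NUMBER=20230101.1   # stand-in for $(Build.BuildNumber)
az acr login --name riceastusallacrk8s
docker build -f Api/Api/Dockerfile -t riceastusallacrk8s.azurecr.io/sampleapi:$BUILD_NUMBER Api
docker push riceastusallacrk8s.azurecr.io/sampleapi:$BUILD_NUMBER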

The container registry service connection details can also be seen in the following picture:

[Image: ACR service connection details]

The Docker Password parameter is the same ACR admin password used in the Helm Chart build pipeline.

Deploy .NET 6 Rest API to AKS cluster

Finally, it's time to deploy a sample Rest API in our cluster and see everything working together.

Take a look at the Release Pipeline for the sample Rest API at this link.

Next, each task in the release pipeline for the sample application is explained:

  • File Creator: values.yaml

This task creates the values file used by the Helm chart to parameterize the API deployment. The pattern __[Env Variable Name]__ is used by the following Tokenizer task to match environment variable names with the sections of the file that should be replaced by the variables' values.

[Image: Helm Chart Values file]

Settings:

File path: values/values.yaml
File Content:
namespace: __Release.Namespace__

environment: "__Release.Abbreviation__"

apphost: __Release.Host__

name: __Release.Image.Name__

container:
  pullPolicy: Always
  acr: __Release.Acr.Name__.azurecr.io
  image: __Release.Image.Name__
  tag: __Release.Image.Tag__
  port: 80
  probeurl: __Release.Container.ProbeUrl__
replicas: 1

ingress:
  backendpathprefix: "/"
  path: __Release.App.Path__
  sslcertificate: "__Release.CertName__"        


  • Tokenizer

This task replaces the __[Env Var Name]__ patterns in the values file with the values of the previously created environment variables. Just pay attention to the Source Files Pattern property value.

[Image: Task responsible for replacing the pattern __[Var Name]__ with environment variable values]
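
Conceptually, the substitution performed by the Tokenizer is a plain text replacement; the snippet below only illustrates the idea with sed and a shell variable, it is not how the task is actually implemented:

# Illustration only: the Tokenizer effectively does this for every __Name__ placeholder
RELEASE_NAMESPACE="my-namespace"   # placeholder value for $(Release.Namespace)
sed -i "s|__Release.Namespace__|${RELEASE_NAMESPACE}|g" values/values.yaml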

  • Helm: Install 3.11.2

Same as in the previous release pipeline for the AKS infrastructure.

  • Azure CLI: Az Acr Helm Repo Add

Finally, we run the helm upgrade command. Before that, the ACR where our Helm Chart Package was stored by its build pipeline must be added to the local Helm repositories, and the agent must be logged in to the AKS cluster via the az aks get-credentials command. A quick way to verify the resulting release is sketched after the script below.

[Image: Azure CLI command task that upgrades the Helm release in our AKS cluster]

Settings:

Azure subscription: point to your Azure Subscription
Script Location: Inline Script
Inline Script:
az acr helm repo add --name $(Release.Acr.Name)
helm registry login $(Release.Acr.Name).azurecr.io --username $(Release.Acr.Name) --password $(Release.Acr.Code)
az aks install-cli
az aks get-credentials --resource-group $(Release.Aks.ResourceGroup) --name $(Release.Aks.Name)
helm upgrade --namespace $(Release.Namespace) --install --reset-values --force --values values/values.yaml $(Release.Chart.AspNetCore)-$(Release.Image.Name) oci://$(Release.Acr.Name).azurecr.io/$(Release.Chart.AspNetCore) --version 1.0.0        
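
Once the upgrade finishes, the release can optionally be verified from the same Azure CLI task (the agent is already logged in to the cluster at this point); the following is just a quick sanity-check sketch using the same pipeline variables:

# Optional sanity check of the deployed release (not required by the pipeline)
helm list --namespace $(Release.Namespace)
helm status $(Release.Chart.AspNetCore)-$(Release.Image.Name) --namespace $(Release.Namespace)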

Login to AKS Cluster and See the Results

In your command-line tool, run the following commands to see the pods running in our cluster:

az login
az aks get-credentials --resource-group ric-eastus-all-rg-k8s --name ric-eastus-all-aks-k8s-01
kubectl get pods --all-namespaces        

The output should be something similar to the following picture:

[Image: Pods running in our created AKS cluster]

In the Azure portal we can also see the results:

[Image: Pods running in our created AKS cluster (Azure portal)]
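
Since the chart also defines an ingress (see the ingress section of the values file above), we can optionally confirm that the host and path were registered before browsing to the gateway; this is just an extra check:

# Optional: inspect the ingress created by the Helm chart
kubectl get ingress --all-namespaces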

Finally, using the AGW URL, navigate to the health check path:

https://ric-eastus-all-k8s.eastus.cloudapp.azure.com/healthz        
[Image: Pod responding to an HTTPS request]
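
The same health check can also be executed from the command line; curl is shown here merely as an alternative to the browser (add -k if the gateway certificate is self-signed):

# Optional: call the health check endpoint from the command line
curl -i https://ric-eastus-all-k8s.eastus.cloudapp.azure.com/healthz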

Conclusion

In this third and final article, we deployed a Rest API to our AKS cluster and executed an HTTPS request against it from the public internet.

