Create an AKS Cluster with the Application Gateway Ingress Controller (AGIC) Add-On (Greenfield) with External DNS & Let's Encrypt
Aslam Chandio
Cloud Engineer || 3x GCP Certified || 6x Azure Certified || 1x AWS Certified || 1x VMware Certified || Docker & Kubernetes || Terraform || Linux || MCSA Certified
There are times when a simple NGINX load balancer is not enough for our needs, and that's where an application gateway comes into play. By default, AKS comes with a basic Azure Load Balancer named "kubernetes".
Application Gateway Ingress Controller (AGIC) is a Kubernetes application that makes it possible for Azure Kubernetes Service (AKS) to use Azure Application Gateway to expose applications to the Internet.
What is an Application Gateway?
Azure Application Gateway is a web traffic (OSI layer 7) load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 - TCP and UDP) and route traffic based on source IP address and port, to a destination IP address and port.
Application Gateway can make routing decisions based on additional attributes of an HTTP request, such as the URI path or host headers. For example, you can route traffic based on the incoming URL: if /images is in the incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images; if /video is in the URL, that traffic is routed to another pool that's optimized for videos.
What is Azure Load Balancer?
Load balancing refers to efficiently distributing incoming network traffic across a group of backend servers or resources.
Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI) model. It's the single point of contact for clients. The load balancer distributes inbound flows that arrive at its front end to backend pool instances, according to configured load-balancing rules and health probes. The backend pool instances can be Azure Virtual Machines or instances in a Virtual Machine Scale Set.
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance internet traffic to your VMs.
An internal (or private) load balancer is used where private IPs are needed at the frontend only. Internal load balancers are used to load balance traffic inside a virtual network. A load balancer frontend can be accessed from an on-premises network in a hybrid scenario.
Which one Should I use?
It depends. Are you going to have a scalable application? Maybe you need to deal with certificates? How about having to route traffic based on the URI path? Then you should go with Application Gateway.
If you simply want to balance load between the VMs without the need for certificates, just a public or private IP address, then you should go with Azure Load Balancer: it's cheaper than Application Gateway and it may fit your needs.
Creating the AKS Cluster with the Application Gateway Ingress Controller (AGIC) using the Azure CLI
Step 1 — Create vNet with Subnet, NSG & Azure Routes in Azure Cloud
# Create Resource Group
az group create --location westus2 --resource-group aks-rg-westus
az group list -o table
az network vnet create -g aks-rg-westus \
-n vNet_aks_uswest \
--address-prefix 10.0.0.0/8 \
--subnet-name aks-subnet \
--subnet-prefix 10.200.0.0/16
az network vnet subnet list --resource-group aks-rg-westus --vnet-name vNet_aks_uswest
az network vnet subnet list --resource-group aks-rg-westus --vnet-name vNet_aks_uswest -o table
az network vnet show --resource-group aks-rg-westus --name vNet_aks_uswest
az network vnet subnet show -g aks-rg-westus -n aks-subnet --vnet-name vNet_aks_uswest
az network vnet subnet show \
--resource-group aks-rg-westus \
--vnet-name vNet_aks_uswest \
--name aks-subnet \
--query id \
-o tsv
/subscriptions/3344b61d6789a-425f-acdf-bb4f12567812/resourceGroups/aks-rg-westus/providers/Microsoft.Network/virtualNetworks/vNet_aks_uswest/subnets/aks-subnet
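The subnet resource ID follows a fixed segment pattern, so instead of copy-pasting the long string you can compose it from variables, or better, capture it directly from az. A minimal sketch with a placeholder subscription ID (substitute your own):

```shell
# Preferred in a live session (requires az):
#   SUBNET_ID=$(az network vnet subnet show -g aks-rg-westus --vnet-name vNet_aks_uswest -n aks-subnet --query id -o tsv)
# Offline, the same ID can be composed from its parts:
SUB_ID="00000000-0000-0000-0000-000000000000"   # placeholder subscription ID
RG="aks-rg-westus"
VNET="vNet_aks_uswest"
SUBNET="aks-subnet"
SUBNET_ID="/subscriptions/${SUB_ID}/resourceGroups/${RG}/providers/Microsoft.Network/virtualNetworks/${VNET}/subnets/${SUBNET}"
echo "$SUBNET_ID"
```

The captured `SUBNET_ID` can then be passed to `az aks create --vnet-subnet-id "$SUBNET_ID"` without any manual copying.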
az network nsg list -o table
az network nsg create --resource-group aks-rg-westus --name aks-nsg-westus --location westus2
az network nsg list -o table
az network vnet subnet update --resource-group aks-rg-westus \
--vnet-name vNet_aks_uswest --name aks-subnet --network-security-group aks-nsg-westus
az network route-table create -g aks-rg-westus -n aks-route-table-westus
az network vnet subnet update --resource-group aks-rg-westus \
--vnet-name vNet_aks_uswest --name aks-subnet --route-table aks-route-table-westus
az aks get-versions --location westus2 -o table
Kubenet Network Plugin:
AKS clusters use kubenet and create an Azure virtual network and subnet for you by default. With kubenet, nodes get an IP address from the Azure virtual network subnet. Pods receive an IP address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address. This approach greatly reduces the number of IP addresses you need to reserve in your network space for pods to use.
Prerequisites:
The cluster identity used by the AKS cluster must at least have the Network Contributor role on the subnet within your virtual network. The CLI sets this role assignment automatically; if you're using an ARM template or other clients, you need to set the role assignment manually. You must also have the appropriate permissions, such as subscription Owner, to create a cluster identity and assign it permissions. If you want to define a custom role instead of using the built-in Network Contributor role, you need the following permissions:
Microsoft.Network/virtualNetworks/subnets/join/action
Microsoft.Network/virtualNetworks/subnets/read
With kubenet, only the nodes receive an IP address in the virtual network subnet. Pods can't communicate directly with each other. Instead, User Defined Routing (UDR) and IP forwarding handle connectivity between pods across nodes. UDRs and IP forwarding configuration is created and maintained by the AKS service by default, but you can bring your own route table for custom route management if you want. You can also deploy pods behind a service that receives an assigned IP address and load balances traffic for the application. The following diagram shows how the AKS nodes receive an IP address in the virtual network subnet, but not the pods:
Azure supports a maximum of 400 routes in a UDR, so you can't have an AKS cluster larger than 400 nodes. AKS virtual nodes and Azure Network Policies aren't supported with kubenet. Calico Network Policies are supported.
Step 2 — Create AKS Cluster
Note: although the kubenet plugin was described above, the command below passes --network-plugin azure (Azure CNI); use --network-plugin kubenet instead if you want the kubenet behavior described in the previous section.
az aks create --name agic-aks-cluster \
--resource-group aks-rg-westus \
--vnet-subnet-id /subscriptions/335671d-4f3a-425f-a12f-bb4f12349712/resourceGroups/aks-rg-westus/providers/Microsoft.Network/virtualNetworks/vNet_aks_uswest/subnets/aks-subnet \
--network-plugin azure \
--service-cidr 10.32.0.0/16 \
--dns-service-ip 10.32.0.10 \
--enable-managed-identity \
--enable-addons ingress-appgw \
--appgw-name agic-appgw \
--appgw-subnet-cidr "10.254.0.0/16" \
--node-count 1 \
--enable-cluster-autoscaler \
--min-count 1 \
--max-count 2 \
--ssh-key-value ~/.ssh/azurekey.pub
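Before running the command, it's worth sanity-checking that the address ranges don't collide: the node subnet and Application Gateway subnet must not overlap each other, and the service CIDR must not overlap either subnet. A quick sketch using Python's ipaddress module (assumes python3 is on your PATH):

```shell
# Verify the CIDR layout used in this walkthrough does not collide
python3 - <<'EOF'
import ipaddress
vnet  = ipaddress.ip_network("10.0.0.0/8")      # vNet_aks_uswest
aks   = ipaddress.ip_network("10.200.0.0/16")   # aks-subnet (nodes)
appgw = ipaddress.ip_network("10.254.0.0/16")   # --appgw-subnet-cidr
svc   = ipaddress.ip_network("10.32.0.0/16")    # --service-cidr
assert aks.subnet_of(vnet) and appgw.subnet_of(vnet)
assert not aks.overlaps(appgw), "node and App Gateway subnets overlap"
assert not svc.overlaps(aks) and not svc.overlaps(appgw), "service CIDR overlaps a subnet"
print("CIDR layout OK")
EOF
```

For the ranges above this prints "CIDR layout OK"; change a range and the assertion tells you which pair collides.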
Update the AKS Cluster to restrict API server access to an authorized IP range
az aks list -g aks-rg-westus
az aks list -g aks-rg-westus -o table
az aks show \
--resource-group aks-rg-westus \
--name agic-aks-cluster \
--query apiServerAccessProfile.authorizedIpRanges
az aks update \
--resource-group aks-rg-westus \
--name agic-aks-cluster \
--api-server-authorized-ip-ranges 39.55.88.99/32
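az accepts a comma-separated list of CIDR ranges here, and a malformed range only fails at request time. A small hypothetical helper (not part of the Azure CLI) to roughly sanity-check the value first:

```shell
# Hypothetical helper: rough syntax check for an IPv4 CIDR like 39.55.88.99/32
# (checks shape only, not that each octet is <= 255)
valid_cidr() {
  printf '%s\n' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/(3[0-2]|[12]?[0-9])$'
}
valid_cidr "39.55.88.99/32" && echo "39.55.88.99/32 looks valid"
```

Usage: `valid_cidr "$MY_RANGE" || echo "bad range"` before passing `$MY_RANGE` to `az aks update`.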
Step 3 — Assign network contributor role to AGIC addon Managed Identity
# Get application gateway id from AKS addon profile
appGatewayId=$(az aks show -n agic-aks-cluster -g aks-rg-westus -o tsv --query "addonProfiles.ingressApplicationGateway.config.effectiveApplicationGatewayId")
echo $appGatewayId
Output : /subscriptions/3341261d-123a-445f-bfdf-bb4f1f1e9712/resourceGroups/MC_aks-rg-westus_agic-aks-cluster_westus2/providers/Microsoft.Network/applicationGateways/agic-appgw
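Azure resource IDs have a fixed segment layout, so the node resource group (the MC_* group) and the gateway name, which the stop/start commands in Step 6 need, can be cut out of the ID with plain shell. A sketch using the sample ID from the output above:

```shell
# Sample value of $appGatewayId from the az aks show query above
appGatewayId="/subscriptions/3341261d-123a-445f-bfdf-bb4f1f1e9712/resourceGroups/MC_aks-rg-westus_agic-aks-cluster_westus2/providers/Microsoft.Network/applicationGateways/agic-appgw"
# Splitting on "/": field 5 is the resource group, field 9 the gateway name
appGatewayRG=$(echo "$appGatewayId" | cut -d/ -f5)
appGatewayName=$(echo "$appGatewayId" | cut -d/ -f9)
echo "$appGatewayRG $appGatewayName"
```

These variables can then drive the cost-saving commands, e.g. `az network application-gateway stop -g "$appGatewayRG" -n "$appGatewayName"`.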
# Get Application Gateway subnet id
appGatewaySubnetId=$(az network application-gateway show --ids $appGatewayId -o tsv --query "gatewayIPConfigurations[0].subnet.id")
echo $appGatewaySubnetId
Output : /subscriptions/3344b61d-4f3a-425f-acdf-bb4f1f65789712/resourceGroups/aks-rg-westus/providers/Microsoft.Network/virtualNetworks/vNet_aks_uswest/subnets/agic-appgw-subnet
OR
az network vnet subnet show \
--resource-group aks-rg-westus \
--vnet-name vNet_aks_uswest \
--name agic-appgw-subnet \
--query id \
-o tsv
Output: /subscriptions/3344b61d-4f3a-1234-acdf-bb4f1f4567712/resourceGroups/aks-rg-westus/providers/Microsoft.Network/virtualNetworks/vNet_aks_uswest/subnets/agic-appgw-subnet
# Get AGIC addon identity
agicAddonIdentity=$(az aks show -n agic-aks-cluster -g aks-rg-westus -o tsv --query "addonProfiles.ingressApplicationGateway.identity.clientId")
echo $agicAddonIdentity
Output: ba4a6927-ddf1-40a9-9a11-fdc85c7be408
# Assign network contributor role to AGIC addon Managed Identity to subnet that contains the Application Gateway
az role assignment create --assignee $agicAddonIdentity --scope $appGatewaySubnetId --role "Network Contributor"
Step 4 — Verify AKS Cluster
az aks get-credentials --resource-group aks-rg-westus --name agic-aks-cluster --overwrite-existing
kubectl config get-clusters
kubectl config get-contexts
kubectl config get-users
kubectl config view
kubectl cluster-info
kubectl get cs
kubectl get nodes
kubectl get nodes -o wide
Step 5 — Verify AKS Add On
# List Kubernetes Deployments in kube-system namespace
kubectl get deploy -n kube-system
Observation:
1. Should find the deployment with name "ingress-appgw-deployment"
2. This is the Azure Application Gateway Ingress Controller Kubernetes Deployment Object
# List Pods
kubectl get pods -n kube-system
# Describe Pod
kubectl -n kube-system describe pod <AGIC-POD-NAME>
kubectl -n kube-system describe pod ingress-appgw-deployment-55965f45cf-x28fm
Observation:
1. Review the line showing the ingress controller image version that was pulled
2. Pulling image "mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:1.5.3"
3. You can also run the below command to find the AppGW Ingress version
kubectl get deploy ingress-appgw-deployment -o yaml -n kube-system | grep "image:"
# Verify ingress-appgw pod Logs
kubectl -n kube-system logs -f $(kubectl -n kube-system get po | egrep -o 'ingress-appgw-deployment[A-Za-z0-9-]+')
Step 6 — Cost Saving - Stop & Start Application Gateway
# Application Gateway Stop & Start
In portal, go to Overview Tab and click on "STOP"
# Azure Application Gateway STOP
az network application-gateway stop --name <APPGW-NAME> --resource-group <RESOURCE-GROUP>
az network application-gateway stop --name agic-appgw --resource-group MC_aks-rg-westus_agic-aks-cluster_westus2
az network application-gateway start --name agic-appgw --resource-group MC_aks-rg-westus_agic-aks-cluster_westus2
Step 7 — Cost Saving - Azure AKS Cluster Stop & Start
az aks stop --name agic-aks-cluster --resource-group aks-rg-westus
az aks start --name agic-aks-cluster --resource-group aks-rg-westus
az aks list -g aks-rg-westus -o table
az aks delete --name agic-aks-cluster -g aks-rg-westus
az group list -o table
az group delete --name aks-rg-westus
Kubernetes ExternalDNS to create Record Sets in Azure DNS from AKS
Automating DNS registration
Introduction
After deploying an application and its services into a Kubernetes cluster, a question surfaces: how do you access it with a custom domain name? A simple solution is to create an A record that points the domain name at the service IP address. This could be done manually, but it becomes hard to scale as you add more services. It can be fully automated with ExternalDNS. This tutorial describes how to manage custom domain names in Azure DNS using ExternalDNS in AKS.
ExternalDNS is a Kubernetes controller that watches for new Ingresses and Services with specific annotations, then creates corresponding DNS records in Azure DNS. It is available as an open-source project on GitHub: https://github.com/kubernetes-sigs/external-dns. It supports more than 30 DNS providers, including Azure DNS and Azure Private DNS zones.
ExternalDNS pods can authenticate to Azure DNS in several ways (for example with a service principal, a managed identity, or a workload identity); this tutorial uses the kubelet's managed identity.
Step 1 — Fetching the Kubelet identity
# Option-1: Get PRINCIPAL_ID using Commands
CLUSTER_GROUP=aks-rg-westus
CLUSTERNAME=agic-aks-cluster
PRINCIPAL_ID=$(az aks show --resource-group $CLUSTER_GROUP --name $CLUSTERNAME --query "identityProfile.kubeletidentity.objectId" --output tsv)
echo $PRINCIPAL_ID
a8f3f690-10d5-48b3-a369-203280449864
# Option-2: Get PRINCIPAL_ID using Azure Portal
1. Go to Resource Groups -> MC_aks-rg-westus_agic-aks-cluster_westus2
2. Find the Managed Identity resource with the name "agic-aks-cluster-agentpool"
3. Make a note of "Object (principal) ID"
Object (principal) ID : a8f234590-10d5-58b3-a359-2032867896864
Step 2 — Assign rights for the Kubelet identity
AZURE_DNS_ZONE="chandiolab.store"
AZURE_DNS_ZONE_RESOURCE_GROUP="aks-rg-westus"
# Option-1: fetch the DNS zone id used to grant access to the kubelet identity
DNS_ID=$(az network dns zone show --name $AZURE_DNS_ZONE --resource-group $AZURE_DNS_ZONE_RESOURCE_GROUP --query "id" --output tsv)
echo $DNS_ID
/subscriptions/3344b5467-4f3a-425f-acdf-bb412349712/resourceGroups/aks-rg-westus/providers/Microsoft.Network/dnszones/chandiolab.store
## Option-2: Get DNS_ID using Azure Portal
1. Go to Azure Portal -> DNS Zones -> YOURDOMAIN.COM (chandiolab.store)
2. Go to Properties
3. Make a note of "Resource ID"
DNS_ID=/subscriptions/3344b123-4f3a-425f-a45f-bb4f14567712/resourceGroups/aks-rg-westus/providers/Microsoft.Network/dnszones/chandiolab.store
# Grant access to Azure DNS zone for the kubelet identity.
az role assignment create --role "DNS Zone Contributor" --assignee $PRINCIPAL_ID --scope $DNS_ID
# Verify the Azure Role Assignment in Managed Identities
1. Go to Azure Portal -> Managed Identities -> agic-aks-cluster-agentpool
2. Go to the Azure role assignments Tab
3. We can see the "DNS Zone Contributor" role assigned to the "agic-aks-cluster-agentpool" Managed Identity
Step 3 — Gather Information Required for azure.json file
# To get Azure Tenant ID
az account show --query "tenantId"
"e3321234-1937-40ef-9bbd-f78dd12344ea"
# To get Azure Subscription ID
az account show --query "id"
"1144456d-4f3a-425f-acdf-bb4f1f4566612"
Step 4 — Make a note of the Client ID and update it in azure.json
# Get the value for userAssignedIdentityID (the kubelet managed identity's Client ID)
az aks show --resource-group $CLUSTER_GROUP --name $CLUSTERNAME --query "identityProfile.kubeletidentity.clientId" --output tsv
"userAssignedIdentityID": "b1171234ec-9612-4068-a4e3-a86e581234c9"
Step 5 — Create azure.json file
{
  "tenantId": "e3321234f-1937-40ef-9bbd-f78dd67894ea",
  "subscriptionId": "3344b61d-4f3a-425f-acdf-be4f1f123412",
  "resourceGroup": "aks-rg-westus",
  "useManagedIdentityExtension": true,
  "userAssignedIdentityID": "b116789c-9612-4888-a4e3-a86e53456bc9"
}
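A stray comma or quote in azure.json only surfaces later as an ExternalDNS authentication error, so it's worth validating the file before creating the secret. A sketch using the sample values from above:

```shell
# Write the sample azure.json and check that it parses as JSON
# before feeding it to kubectl create secret
cat > /tmp/azure.json <<'EOF'
{
  "tenantId": "e3321234f-1937-40ef-9bbd-f78dd67894ea",
  "subscriptionId": "3344b61d-4f3a-425f-acdf-be4f1f123412",
  "resourceGroup": "aks-rg-westus",
  "useManagedIdentityExtension": true,
  "userAssignedIdentityID": "b116789c-9612-4888-a4e3-a86e53456bc9"
}
EOF
python3 -m json.tool /tmp/azure.json > /dev/null && echo "azure.json is valid JSON"
```

If the file is malformed, json.tool prints the exact position of the syntax error instead.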
Step 6 — Use the azure.json file to create a Kubernetes secret
# List k8s secrets
kubectl get secrets
# Create k8s secret with azure.json
cd 14-ExternalDNS-for-AzureDNS-on-AKS
kubectl create secret generic azure-config-file --namespace "default" --from-file azure.json
# List k8s secrets
kubectl get secrets
# k8s secret output as yaml
kubectl get secret azure-config-file -o yaml
Observation:
1. You should see a base64 encoded value
2. Decode it locally (for example with base64 -d) to review it
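Rather than pasting secret material into a website, base64 can be decoded locally. A minimal sketch using a sample string standing in for the secret's data (not your real azure.json):

```shell
# Sample value standing in for the secret's data
AZURE_JSON='{"tenantId":"e3321234f-1937-40ef-9bbd-f78dd67894ea"}'
# Kubernetes stores secret data base64-encoded:
ENCODED=$(printf '%s' "$AZURE_JSON" | base64)
# Decode it locally instead of using a website:
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```

Against the real cluster the same idea is: `kubectl get secret azure-config-file -o jsonpath='{.data.azure\.json}' | base64 -d`.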
Step 7 — Review external-dns.yaml manifest
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods","nodes"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.5
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
            - --provider=azure
            - --azure-resource-group=MyDnsResourceGroup # (optional) use the DNS zones from the tutorial's resource group
            - --txt-prefix=externaldns-
          volumeMounts:
            - name: azure-config-file
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: azure-config-file
          secret:
            secretName: azure-config-file
Step 8 — Deploy ExternalDNS
# Deploy ExternalDNS
kubectl apply -f kube-manifests/external-dns.yaml
# Verify the ExternalDNS Deployment, ServiceAccount, ClusterRole and ClusterRoleBinding
kubectl get sa
kubectl describe sa external-dns
kubectl get clusterrole external-dns
kubectl get clusterrolebinding external-dns-viewer
kubectl get deployment
kubectl get pods
# Verify ExternalDNS Logs
kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+')
Observation:
1. The logs should show ExternalDNS authenticating with the kubelet managed identity's Client ID (for example "b1173bec-9612-4068-a4e3-a86e58998bc9", the agic-aks-cluster-agentpool Client ID)
Step 9 — Deploy Simple Application
01-NginxApp1-Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: aslam24/nginx-web-newapp:v1
          ports:
            - containerPort: 80
02-NginxApp1-ClusterIP-Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-clusterip-service
  labels:
    app: app1-nginx
spec:
  type: ClusterIP
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80
03-Ingress-with-ExternalDNS.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginxapp1-ingress-service
spec:
  ingressClassName: azure-application-gateway
  rules:
    - host: myapp1.chandiolab.store
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-clusterip-service
                port:
                  number: 80
Step 10 — Verify Record Set in DNS Zones -> chandiolab.store
# Template Command
az network dns record-set a list -g <Resource-Group-dns-zones> -z <yourdomain.com>
# Replace DNS Zones Resource Group and yourdomain
az network dns record-set a list -g aks-rg-westus -z chandiolab.store -o table
# Additionally you can review via Azure Portal
Go to Portal -> DNS Zones -> <YOUR-DOMAIN>
Review records in "Overview" Tab
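Even after ExternalDNS writes the record, DNS propagation can take a minute or two. A small hypothetical polling helper (not part of any tool used above), demonstrated here against localhost:

```shell
# Hypothetical helper: poll until a hostname resolves
# (uses getent, available on Linux)
wait_for_dns() {
  host=$1; tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if getent hosts "$host" > /dev/null 2>&1; then
      echo "resolved"; return 0
    fi
    i=$((i + 1)); sleep 2
  done
  echo "timeout"; return 1
}
wait_for_dns localhost 1
```

Usage after deploying the Ingress: `wait_for_dns myapp1.chandiolab.store` before opening the URL in a browser.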
AGIC Ingress - SSL using Let's Encrypt
This section configures your AKS to use LetsEncrypt.org and automatically obtain a TLS/SSL certificate for your domain. The certificate is installed on Application Gateway, which performs SSL/TLS termination for your AKS cluster. The setup described here uses the cert-manager Kubernetes add-on, which automates the creation and management of certificates.
Step 1 — Install the Helm CLI
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
# Verify Helm Version
helm version
Step 2 — Create cert-manager Namespace
# Create Namespace
kubectl create namespace cert-manager
kubectl get ns
# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager cert-manager.io/disable-validation=true
Step 3 — Install Cert Manager using Helm
# Review Releases and Update latest release number below
https://github.com/cert-manager/cert-manager/releases/
# To Install CRDs manually without HELM
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.0/cert-manager.crds.yaml
# To Uninstall CRDs manually
kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.0/cert-manager.crds.yaml
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# List Helm Repos
helm repo list
# Update your local Helm chart repository cache
helm repo update
helm search repo cert-manager
# Install the cert-manager Helm chart
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.14.0 \
--set installCRDs=true
Or, as a single line (this worked for me):
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.14.0 --set installCRDs=true
helm list
# List Helm Releases in cert-manager namespace
helm list -n cert-manager
helm list -n cert-manager --output yaml
# helm status in cert-manager namespace
helm status cert-manager --show-resources -n cert-manager
# Verify Cert Manager Services
kubectl get svc --namespace cert-manager
# Verify All Kubernetes Resources created in cert-manager namespace
kubectl get all --namespace cert-manager
Step 4 — Review or Create Cluster Issuer Kubernetes Manifest
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [email protected]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: azure/application-gateway
ACME server URLs:
For production: https://acme-v02.api.letsencrypt.org/directory
For staging: https://acme-staging-v02.api.letsencrypt.org/directory
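Let's Encrypt production rate limits are easy to hit while testing, so it is common to point the issuer at the staging endpoint first. Swapping the server URL in a manifest can be scripted; a sketch using a one-line stand-in file:

```shell
# Stand-in for the ClusterIssuer manifest's server line
printf 'server: https://acme-v02.api.letsencrypt.org/directory\n' > /tmp/issuer-prod.yml
# Rewrite production -> staging (safe one-way: "acme-staging-v02" does not contain "acme-v02.")
sed 's|acme-v02\.|acme-staging-v02.|' /tmp/issuer-prod.yml > /tmp/issuer-staging.yml
cat /tmp/issuer-staging.yml
```

Applied to the real file this would be `sed 's|acme-v02\.|acme-staging-v02.|' cluster-issuer.yml`; staging certificates are not browser-trusted, so switch back to production once the flow works.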
Step 5 — Deploy Cluster Issuer
# Deploy Cluster Issuer
cd 22-AGIC-SSL-with-LetsEncrypt/
kubectl apply -f 01-CertManager-ClusterIssuer/cluster-issuer.yml
# List Cluster Issuer
kubectl get clusterissuer
# Describe Cluster Issuer
kubectl describe clusterissuer letsencrypt
Step 6 — Deploy Simple Application
Create or review the application Deployment and Service manifests
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: aslam24/nginx-web-onix:v1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-clusterip-service
  labels:
    app: app1-nginx
spec:
  type: ClusterIP
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-nginx-deployment
  labels:
    app: app2-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2-nginx
  template:
    metadata:
      labels:
        app: app2-nginx
    spec:
      containers:
        - name: app2-nginx
          image: aslam24/nginx-web-fablesmaster:v1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app2-nginx-clusterip-service
  labels:
    app: app2-nginx
spec:
  type: ClusterIP
  selector:
    app: app2-nginx
  ports:
    - port: 80
      targetPort: 80
Step 7 — Create or Review Ingress SSL Kubernetes Manifest
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ssl-letsencrypt
  annotations:
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: azure-application-gateway
  tls:
    - secretName: sapp1-chandiolab-secret
      hosts:
        - sapp1.chandiolab.store
    - secretName: sapp2-chandiolab-secret
      hosts:
        - sapp2.chandiolab.store
  rules:
    - host: sapp1.chandiolab.store
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-clusterip-service
                port:
                  number: 80
    - host: sapp2.chandiolab.store
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-clusterip-service
                port:
                  number: 80
Step 8 — Deploy Kubernetes Manifests & Verify
# Deploy Application
kubectl apply -f 02-kube-manifests/
# Verify Pods
kubectl get pods
# Verify Kubernetes Secrets
kubectl get secrets
# YAML Output of Kubernetes Secrets
kubectl get secret sapp1-chandiolab-secret -o yaml
kubectl get secret sapp2-chandiolab-secret -o yaml
Observation:
1. Review tls.crt and tls.key
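The certificate in tls.crt can be read with openssl instead of the browser. The inspection commands are demonstrated below on a throwaway self-signed certificate; against the real cluster you would pipe the secret's tls.crt through base64 -d first:

```shell
# Generate a throwaway self-signed cert just to demonstrate the inspection commands
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=sapp1.chandiolab.store" 2> /dev/null
# Print who the cert is for and when it expires
openssl x509 -in /tmp/tls.crt -noout -subject -enddate
```

On the cluster: `kubectl get secret sapp1-chandiolab-secret -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -enddate` (a real Let's Encrypt cert will also show the issuer via -issuer).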
# Verify SSL Certificates (the READY column should turn to True)
kubectl get certificate
kubectl get pods -n cert-manager
# Verify external-dns Controller logs
kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+')
az network dns record-set a list -g aks-rg-westus -z chandiolab.store -o table
Access Application
https://sapp1.chandiolab.store
https://sapp2.chandiolab.store
Observation:
1. HTTP requests should redirect to the HTTPS URL
2. We added the AGIC ssl-redirect annotation in the Ingress manifest
# Application HTTPS URLs
https://sapp1.chandiolab.store
https://sapp2.chandiolab.store
Observation
1. Review the SSL certificate from the browser after accessing the URL
2. We should see a valid SSL certificate generated by Let's Encrypt
DNS Zone
Step 9 — Clean-Up
# Delete Applications
kubectl delete -f 02-kube-manifests/