Networking Concepts in Azure Kubernetes Service (AKS) - Part 1
Aslam Chandio
In a container-based, microservices approach to application development, application components work together to process their tasks, and Kubernetes provides various resources, such as services, ingress controllers, and network policies, to enable this cooperation. In AKS, how nodes and pods receive IP addresses depends on the network plugin you choose: kubenet, Azure CNI, or Azure CNI Overlay.
Create vNet with Subnet, NSG & Route Table
# List existing resource groups
az group list -o table
# Create Resource Group
az group create --name aks-rg --location eastus
# Create Virtual Network & default Subnet
az network vnet create -g aks-rg \
-n vNet_aks_useast \
--address-prefix 10.0.0.0/8 \
--subnet-name aks-subnet \
--subnet-prefix 10.100.0.0/16
# Verify the VNet and its subnet
az network vnet subnet list --resource-group aks-rg --vnet-name vNet_aks_useast -o table
az network vnet show --resource-group aks-rg --name vNet_aks_useast
Get the subnet resource ID:
az network vnet subnet show \
--resource-group aks-rg \
--vnet-name vNet_aks_useast \
--name aks-subnet \
--query id \
-o tsv
# Show the full subnet details
az network vnet subnet show -g aks-rg -n aks-subnet --vnet-name vNet_aks_useast

# Sample output of the --query id command above:
/subscriptions/ea4c38a0-2746-9999-1111-14f8885743f77c/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/vNet_aks_useast/subnets/aks-subnet
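Rather than pasting that long resource ID into later commands, you can capture it in a shell variable (SUBNET_ID is just a name chosen here):

# Store the subnet ID for reuse in the az aks create commands below
SUBNET_ID=$(az network vnet subnet show \
--resource-group aks-rg \
--vnet-name vNet_aks_useast \
--name aks-subnet \
--query id -o tsv)
echo "$SUBNET_ID"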
# List existing NSGs, then create one for the AKS subnet
az network nsg list -o table
az network nsg create --resource-group aks-rg --name aks-nsg --location eastus
# Associate the NSG with the subnet
az network vnet subnet update --resource-group aks-rg \
--vnet-name vNet_aks_useast --name aks-subnet --network-security-group aks-nsg
# Create a route table and associate it with the subnet
az network route-table create -g aks-rg -n aks-route-table
az network vnet subnet update --resource-group aks-rg \
--vnet-name vNet_aks_useast --name aks-subnet --route-table aks-route-table
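To confirm both associations took effect, you can query the subnet (the property names below assume the standard subnet resource shape):

# Verify the NSG and route table are now attached to the subnet
az network vnet subnet show -g aks-rg --vnet-name vNet_aks_useast -n aks-subnet \
--query "{nsg:networkSecurityGroup.id, routeTable:routeTable.id}" -o json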
Kubenet Network Plugin:
AKS clusters use kubenet and create an Azure virtual network and subnet for you by default. With kubenet, nodes get an IP address from the Azure virtual network subnet. Pods receive an IP address from an address space that's logically separate from the nodes' Azure virtual network subnet. Network address translation (NAT) is then configured so the pods can reach resources on the Azure virtual network: the source IP address of the traffic is NAT'd to the node's primary IP address. This approach greatly reduces the number of IP addresses you need to reserve in your network space for pods.
Prerequisites:
The cluster identity used by the AKS cluster must have at least the Network Contributor role on the subnet within your virtual network. The Azure CLI sets this role assignment automatically; if you're using an ARM template or other clients, you need to set it manually. You must also have the appropriate permissions, such as Owner on the subscription, to create a cluster identity and assign it permissions. If you want to define a custom role instead of using the built-in Network Contributor role, you need the following permissions:
Microsoft.Network/virtualNetworks/subnets/join/action
Microsoft.Network/virtualNetworks/subnets/read
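If you do need to assign the role manually, a sketch along these lines should work; the identity and subscription placeholders are hypothetical and must be replaced with your own values:

# Grant the cluster identity Network Contributor on the AKS subnet
az role assignment create \
--assignee <cluster-identity-client-id> \
--role "Network Contributor" \
--scope /subscriptions/<subscription-id>/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/vNet_aks_useast/subnets/aks-subnet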
With kubenet, only the nodes receive an IP address in the virtual network subnet. Pods can't communicate directly with each other; instead, User Defined Routing (UDR) and IP forwarding handle connectivity between pods across nodes. The UDR and IP forwarding configuration is created and maintained by the AKS service by default, but you can bring your own route table for custom route management if you want. You can also deploy pods behind a service that receives an assigned IP address and load balances traffic for the application. In this model, the AKS nodes receive an IP address in the virtual network subnet, but the pods do not.
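If you let AKS manage routing, as in the kubenet example below where the manual route table association is removed first, the routes end up in a table inside the node resource group. A quick way to inspect them once that cluster exists (NODE_RG is just a convenience variable chosen here):

# Find the AKS-managed node resource group and list the routes AKS wrote there
NODE_RG=$(az aks show -g aks-rg -n aks-cluster --query nodeResourceGroup -o tsv)
az network route-table list -g "$NODE_RG" --query "[].routes[]" -o table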
Azure supports a maximum of 400 routes in a UDR, so you can't have an AKS cluster larger than 400 nodes. AKS virtual nodes and Azure Network Policies aren't supported with kubenet. Calico Network Policies are supported.
Create AKS Cluster with the Kubenet Plugin (first remove the manually associated route table from the subnet)
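Before creating the kubenet cluster, disassociate the route table we attached earlier so AKS can manage routing on its own. A sketch using the CLI's generic --remove syntax (the property name routeTable is assumed from the subnet resource shape):

# Detach the manually associated route table from the subnet
az network vnet subnet update -g aks-rg --vnet-name vNet_aks_useast -n aks-subnet --remove routeTable

With the route table detached, create the cluster: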
az aks create --resource-group aks-rg \
--name aks-cluster \
--vnet-subnet-id /subscriptions/ea4c38a0-1234-1111-9999-14f88f278645c/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/vNet_aks_useast/subnets/aks-subnet \
--network-plugin kubenet \
--pod-cidr 10.244.0.0/16 \
--service-cidr 10.32.0.0/16 \
--dns-service-ip 10.32.0.10 \
--node-count 2 \
--ssh-key-value ~/.ssh/azurekey.pub
# Restrict access to the cluster API server to your own public IP
az aks update --resource-group aks-rg --name aks-cluster --api-server-authorized-ip-ranges 39.100.100.100/32
# Sign in and merge the cluster credentials into your kubeconfig
az login
az aks get-credentials --resource-group aks-rg --name aks-cluster
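With the credentials merged, a quick way to see kubenet's split address spaces is to compare node and pod IPs (the test pod name is arbitrary):

# Node IPs come from the VNet subnet (10.100.0.0/16)
kubectl get nodes -o wide
# Pod IPs come from the pod CIDR (10.244.0.0/16) and are NAT'd behind the node IP
kubectl run nginx-test --image=nginx
kubectl get pod nginx-test -o wide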
Limitations & considerations for kubenet
Kubenet adds an extra hop for pod traffic, which introduces minor latency, and it relies on route tables and UDRs, which adds operational complexity. As noted above, the UDR route limit caps a cluster at 400 nodes, virtual nodes and Azure Network Policies aren't supported (Calico is), and Windows Server node pools require Azure CNI.
Azure CNI Network Plugin:
With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and be unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports, and the equivalent number of IP addresses per node is reserved up front. This approach can lead to IP address exhaustion, or the need to rebuild the cluster in a larger subnet as application demand grows, so it's important to plan properly. To avoid these planning challenges, you can enable Azure CNI networking for dynamic allocation of IPs and enhanced subnet support.
Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
Nodes use the Azure CNI Kubernetes plugin.
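Because the IPs for each node's maximum pod count are reserved up front, the --max-pods setting directly drives how large the subnet must be. As a rough sketch, you could raise that cap when adding a node pool later; the pool name and values here are illustrative:

# Add a node pool that reserves subnet IPs for up to 50 pods per node (the Azure CNI default is 30)
az aks nodepool add \
--resource-group aks-rg \
--cluster-name aks-cluster \
--name userpool1 \
--node-count 2 \
--max-pods 50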
Create AKS Cluster with Azure CNI Network Plugin:
az aks create --resource-group aks-rg \
--name aks-cluster \
--vnet-subnet-id /subscriptions/ea4c38a0-1234-5411-1111-14f88f2cf77c/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/vNet_aks_useast/subnets/aks-subnet \
--network-plugin azure \
--service-cidr 10.32.0.0/16 \
--dns-service-ip 10.32.0.10 \
--node-count 2 \
--ssh-key-value ~/.ssh/azurekey.pub
az aks update --resource-group aks-rg --name aks-cluster --api-server-authorized-ip-ranges 39.51.100.100/32
With Azure CNI, pods in every namespace get their IPs directly from the cluster subnet, which is 10.100.0.0/16 in this example.
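A quick check after the Azure CNI cluster is up (any workload will do; the wide output shows each pod's IP):

# Pod IPs are drawn directly from aks-subnet (10.100.0.0/16), alongside the node IPs
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide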
Azure CNI Overlay Network Plugin:
Azure CNI Overlay represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of VNet IPs to pods. It achieves this by assigning private CIDR IPs to pods, which are separate from the VNet and can be reused across multiple clusters. Unlike Kubenet, where the traffic dataplane is handled by the Linux kernel networking stack of the Kubernetes nodes, Azure CNI Overlay delegates this responsibility to Azure networking.
Create AKS Cluster with Azure CNI Overlay Network Plugin:
az aks create --resource-group aks-rg \
--name aks-cluster \
--vnet-subnet-id /subscriptions/ea4c38a0-1111-1111-9e90-14f88f7855757c/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/vNet_aks_useast/subnets/aks-subnet \
--network-plugin azure \
--network-plugin-mode overlay \
--pod-cidr 172.17.0.0/16 \
--service-cidr 10.32.0.0/16 \
--dns-service-ip 10.32.0.10 \
--node-count 2 \
--ssh-key-value ~/.ssh/azurekey.pub
az aks update --resource-group aks-rg --name aks-cluster --api-server-authorized-ip-ranges 39.100.100.100/32
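To confirm the cluster really came up in overlay mode, you can inspect its network profile (the property names assume the current az aks output shape):

# Expect networkPlugin=azure, networkPluginMode=overlay, podCidr=172.17.0.0/16
az aks show -g aks-rg -n aks-cluster \
--query "{plugin:networkProfile.networkPlugin, mode:networkProfile.networkPluginMode, podCidr:networkProfile.podCidr}" -o table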
Differences between Kubenet and Azure CNI Overlay
Like Azure CNI Overlay, kubenet assigns pods IP addresses from an address space logically separate from the VNet, but it has scaling and other limitations: pod traffic is routed through UDRs and the node's kernel networking stack, and the UDR route limit caps the cluster at 400 nodes, whereas Azure CNI Overlay hands the dataplane to Azure networking and avoids that limit. If you don't want to assign VNet IP addresses to pods because of an IP shortage, Azure CNI Overlay is the recommended option.
IP address planning
With Azure CNI Overlay, only the nodes consume IPs from the VNet subnet. Pods are assigned addresses from the private --pod-cidr range, which can be reused across clusters but must not overlap with the VNet address space or any network directly connected to it.
Network security groups
Pod to pod traffic with Azure CNI Overlay isn't encapsulated, and subnet network security group rules are applied. If the subnet NSG contains deny rules that would impact the pod CIDR traffic, make sure the following rules are in place to ensure proper cluster functionality (in addition to all AKS egress requirements):
Traffic from the node CIDR to the node CIDR on all ports and protocols
Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)
Traffic from a pod to any destination outside of the pod CIDR block utilizes SNAT to set the source IP to the IP of the node where the pod runs.
If you wish to restrict traffic between workloads in the cluster, we recommend using network policies.
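Policy enforcement has to be chosen when the cluster is created: kubenet pairs with Calico, while Azure CNI and Azure CNI Overlay also support the Azure network policy engine. A minimal sketch (the cluster name here is hypothetical, and the networking flags from the earlier examples still apply):

# Create a cluster with the Azure network policy engine enabled
az aks create --resource-group aks-rg \
--name aks-cluster-netpol \
--network-plugin azure \
--network-policy azure \
--node-count 2 \
--generate-ssh-keys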