Networking Concepts in Azure Kubernetes Service (AKS) - Part 1

In a container-based, microservices approach to application development, application components work together to process their tasks. Kubernetes provides various resources enabling this cooperation:

  • You can connect to and expose applications internally or externally (see the Service example after this list).
  • You can build highly available applications by load balancing your applications.
  • You can restrict the flow of network traffic into or between pods and nodes to improve security.
  • You can configure Ingress traffic for SSL/TLS termination or routing of multiple components for your more complex applications.
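For example, a workload can be exposed internally or externally through a Service backed by an Azure load balancer. A minimal sketch, assuming kubectl is already pointed at a cluster (the nginx image and the demo-app name are placeholders):

# Run a sample workload and expose it through an Azure load balancer
kubectl create deployment demo-app --image=nginx --replicas=2
kubectl expose deployment demo-app --port=80 --type=LoadBalancer

# EXTERNAL-IP is provisioned by Azure once the Service is ready
kubectl get service demo-app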

Prerequisites:

Create a vNet with a Subnet, NSG & Azure Route Table

# List existing resource groups
az group list -o table

# Create Resource Group
az group create --name aks-rg --location eastus
# Create Virtual Network & default Subnet
az network vnet create -g aks-rg \
                       -n vNet_aks_useast \
                       --address-prefix 10.0.0.0/8 \
                       --subnet-name aks-subnet \
                       --subnet-prefix 10.100.0.0/16

# Inspect the new vNet and its subnet
az network vnet subnet list --resource-group aks-rg --vnet-name vNet_aks_useast -o table

az network vnet show --resource-group aks-rg --name vNet_aks_useast

Get the subnet resource ID:

az network vnet subnet show --resource-group aks-rg \
                            --vnet-name vNet_aks_useast \
                            --name aks-subnet \
                            --query id \
                            -o tsv

# Full subnet details
az network vnet subnet show -g aks-rg -n aks-subnet --vnet-name vNet_aks_useast

The first command returns the subnet resource ID, for example:

/subscriptions/ea4c38a0-2746-9999-1111-14f8885743f77c/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/vNet_aks_useast/subnets/aks-subnet
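For scripting the cluster creation later in this walkthrough, it helps to capture the ID in a shell variable, for example:

# Store the subnet resource ID for reuse in later commands
SUBNET_ID=$(az network vnet subnet show \
                --resource-group aks-rg \
                --vnet-name vNet_aks_useast \
                --name aks-subnet \
                --query id -o tsv)
echo "$SUBNET_ID"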
                          
# Create an NSG (and list existing ones)
az network nsg list -o table
az network nsg create --resource-group aks-rg --name aks-nsg --location eastus

# Associate the NSG with the subnet
az network vnet subnet update --resource-group aks-rg \
--vnet-name vNet_aks_useast --name aks-subnet --network-security-group aks-nsg

# Create a route table and associate it with the subnet
az network route-table create -g aks-rg -n aks-route-table

az network vnet subnet update --resource-group aks-rg \
--vnet-name vNet_aks_useast --name aks-subnet --route-table aks-route-table
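To confirm that both associations took effect, query the subnet; the JMESPath keys below map to the networkSecurityGroup and routeTable properties of the subnet object:

# Verify the NSG and route table are attached to the subnet
az network vnet subnet show --resource-group aks-rg \
    --vnet-name vNet_aks_useast --name aks-subnet \
    --query "{nsg:networkSecurityGroup.id, routeTable:routeTable.id}" -o jsonc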

Kubenet Network Plugin:

AKS clusters use kubenet and create an Azure virtual network and subnet for you by default. With kubenet, nodes get an IP address from the Azure virtual network subnet. Pods receive an IP address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address. This approach greatly reduces the number of IP addresses you need to reserve in your network space for pods to use.

Prerequisites:

  • The virtual network for the AKS cluster must allow outbound internet connectivity.
  • Don't create more than one AKS cluster in the same subnet.
  • AKS clusters can't use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes service address range, pod address range, or cluster virtual network address range. The range can't be updated after you create your cluster.

The cluster identity used by the AKS cluster must have at least the Network Contributor role on the subnet within your virtual network. The Azure CLI sets this role assignment automatically; if you're using an ARM template or another client, you need to set it manually. You must also have the appropriate permissions, such as subscription Owner, to create a cluster identity and assign it permissions. If you want to define a custom role instead of using the built-in Network Contributor role, you need the following permissions:

  • Microsoft.Network/virtualNetworks/subnets/join/action
  • Microsoft.Network/virtualNetworks/subnets/read
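If you do need to set the role assignment manually, for example for a user-assigned identity, a sketch along these lines works; $IDENTITY_PRINCIPAL_ID is a placeholder for your identity's principal ID, and $SUBNET_ID is the subnet ID captured earlier:

# Grant the cluster identity Network Contributor on the subnet
az role assignment create \
    --assignee "$IDENTITY_PRINCIPAL_ID" \
    --role "Network Contributor" \
    --scope "$SUBNET_ID"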

With kubenet, only the nodes receive an IP address in the virtual network subnet. Pods can't communicate directly with each other; instead, user-defined routing (UDR) and IP forwarding handle connectivity between pods across nodes. By default, the UDR and IP forwarding configuration is created and maintained by the AKS service, but you can bring your own route table for custom route management. You can also deploy pods behind a service that receives an assigned IP address and load balances traffic for the application. In this model, the AKS nodes receive an IP address in the virtual network subnet, but the pods do not.

Azure supports a maximum of 400 routes in a UDR, so you can't have an AKS cluster larger than 400 nodes. AKS virtual nodes and Azure Network Policies aren't supported with kubenet. Calico Network Policies are supported.
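If you need network policies on a kubenet cluster, Calico is enabled at creation time with the --network-policy flag. A minimal sketch (the node count and --generate-ssh-keys are illustrative; see the full create command below for the subnet and CIDR flags):

# Kubenet cluster with Calico network policy enabled
az aks create --resource-group aks-rg \
              --name aks-cluster \
              --network-plugin kubenet \
              --network-policy calico \
              --node-count 2 \
              --generate-ssh-keys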


Create AKS Cluster with the Kubenet Plugin (remove any manually created routes from the subnet first)

az aks create --resource-group aks-rg \
              --name aks-cluster \
              --vnet-subnet-id /subscriptions/ea4c38a0-1234-1111-9999-14f88f278645c/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/vNet_aks_useast/subnets/aks-subnet \
              --network-plugin kubenet \
              --pod-cidr 10.244.0.0/16 \
              --service-cidr 10.32.0.0/16 \
              --dns-service-ip 10.32.0.10 \
              --node-count 2 \
              --ssh-key-value ~/.ssh/azurekey.pub

# Restrict access to the API server to a trusted public IP
az aks update --resource-group aks-rg --name aks-cluster --api-server-authorized-ip-ranges 39.100.100.100/32

# Sign in and fetch the cluster credentials for kubectl
az login
az aks get-credentials --resource-group aks-rg --name aks-cluster
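With credentials in place, you can confirm the kubenet behavior described above: each node owns a /24 slice of the pod CIDR, and AKS maintains one route per node in the attached route table. A quick check:

# Each node is assigned a /24 from the 10.244.0.0/16 pod CIDR
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR

# AKS writes one route per node into the route table we attached earlier
az network route-table route list --resource-group aks-rg \
    --route-table-name aks-route-table -o table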

Limitations & considerations for kubenet

  • An additional hop is required in the design of kubenet, which adds minor latency to pod communication.
  • Route tables and user-defined routes are required for using kubenet, which adds complexity to operations.
  • Direct pod addressing isn't supported for kubenet due to kubenet design.
  • Unlike Azure CNI clusters, multiple kubenet clusters can't share a subnet.
  • AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic between the node and pod CIDR. For more details, see Network security groups.
  • Features not supported on kubenet include: Azure network policies, Windows node pools, and the virtual nodes add-on.

Azure CNI Network Plugin:

With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and be unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports, and the equivalent number of IP addresses per node is reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly. To avoid these planning challenges, you can enable Azure CNI networking for dynamic allocation of IPs and enhanced subnet support.
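The per-node reservation is controlled with the --max-pods parameter (the Azure CNI default is 30 pods per node). As a sketch, a node pool whose nodes each reserve 50 pod IPs could be added like this; userpool1 is a placeholder name:

# Add a node pool whose nodes each reserve 50 pod IPs up front
az aks nodepool add --resource-group aks-rg \
                    --cluster-name aks-cluster \
                    --name userpool1 \
                    --max-pods 50 \
                    --node-count 2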

Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.

Nodes use the Azure CNI Kubernetes plugin.

Create AKS Cluster with Azure CNI Network Plugin:

az aks create --resource-group aks-rg \
              --name aks-cluster \
              --vnet-subnet-id /subscriptions/ea4c38a0-1234-5411-1111-14f88f2cf77c/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/vNet_aks_useast/subnets/aks-subnet \
              --network-plugin azure \
              --service-cidr 10.32.0.0/16 \
              --dns-service-ip 10.32.0.10 \
              --node-count 2 \
              --ssh-key-value ~/.ssh/azurekey.pub

az aks update --resource-group aks-rg --name aks-cluster --api-server-authorized-ip-ranges 39.51.100.100/32        

With Azure CNI, all pods get their IPs from the cluster subnet itself (10.100.0.0/16 in this example).
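You can verify this directly; node IPs and pod IPs both come out of aks-subnet:

# With Azure CNI, pod IPs land in 10.100.0.0/16 alongside the node IPs
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide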

Azure CNI Overlay Network Plugin:

Azure CNI Overlay represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of VNet IPs to pods. It achieves this by assigning private CIDR IPs to pods, which are separate from the VNet and can be reused across multiple clusters. Unlike Kubenet, where the traffic dataplane is handled by the Linux kernel networking stack of the Kubernetes nodes, Azure CNI Overlay delegates this responsibility to Azure networking.

Create AKS Cluster with Azure CNI Overlay Network Plugin:

az aks create --resource-group aks-rg \
              --name aks-cluster \
              --vnet-subnet-id /subscriptions/ea4c38a0-1111-1111-9e90-14f88f7855757c/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/vNet_aks_useast/subnets/aks-subnet \
              --network-plugin azure \
              --network-plugin-mode overlay \
              --pod-cidr 172.17.0.0/16 \
              --service-cidr 10.32.0.0/16 \
              --dns-service-ip 10.32.0.10 \
              --node-count 2 \
              --ssh-key-value ~/.ssh/azurekey.pub

az aks update --resource-group aks-rg --name aks-cluster --api-server-authorized-ip-ranges 39.100.100.100/32        
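To confirm the cluster is running in overlay mode, inspect its network profile:

# networkPluginMode should report overlay, and podCidr the 172.17.0.0/16 range
az aks show --resource-group aks-rg --name aks-cluster \
    --query "networkProfile.{plugin:networkPlugin, mode:networkPluginMode, podCidr:podCidr}" \
    -o table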

Differences between Kubenet and Azure CNI Overlay

Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet, but it has scaling and other limitations. If you don't want to assign VNet IP addresses to pods because of an IP shortage, we recommend using Azure CNI Overlay; the Microsoft documentation linked in the references provides a detailed comparison of the two.

IP address planning

  • Cluster Nodes: When setting up your AKS cluster, make sure your VNet subnet has enough room to grow for future scaling. Keep in mind that clusters can't scale across subnets, but you can always add new node pools in another subnet within the same VNet for extra space. A /24 subnet can fit up to 251 nodes, since Azure reserves five IP addresses in each subnet for management tasks.
  • Pods: The Overlay solution assigns a /24 address space for pods on every node from the private CIDR that you specify during cluster creation. The /24 size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure the private CIDR is large enough to provide /24 address spaces for new nodes to support future cluster expansion (a quick sizing calculation follows this list). Also consider the following factors: the same pod CIDR space can be used on multiple independent AKS clusters in the same VNet; the pod CIDR must not overlap with the cluster subnet range; and the pod CIDR must not overlap with directly connected networks (like VNet peering, ExpressRoute, or VPN). If external traffic has source IPs in the pod CIDR range, it needs translation to a non-overlapping IP via SNAT to communicate with the cluster.
  • Kubernetes service address range: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than /12. This range shouldn't overlap with the pod CIDR range, cluster subnet range, and IP range used in peered VNets and on-premises networks.
  • Kubernetes DNS service IP address: This IP address is within the Kubernetes service address range that's used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the kubernetes.default.svc.cluster.local address.
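As a quick sizing sanity check: because overlay hands each node a fixed /24, a pod CIDR with prefix length p supports at most 2^(24 - p) nodes. For example:

# Max nodes an overlay pod CIDR can support (one /24 per node)
POD_CIDR_PREFIX=16                        # e.g. 172.17.0.0/16
echo $(( 2 ** (24 - POD_CIDR_PREFIX) ))   # prints 256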

Network security groups

Pod to pod traffic with Azure CNI Overlay isn't encapsulated, and subnet network security group rules are applied. If the subnet NSG contains deny rules that would impact the pod CIDR traffic, make sure the following rules are in place to ensure proper cluster functionality (in addition to all AKS egress requirements):

  • Traffic from the node CIDR to the node CIDR on all ports and protocols
  • Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
  • Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)

Traffic from a pod to any destination outside of the pod CIDR block utilizes SNAT to set the source IP to the IP of the node where the pod runs.
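For example, if your subnet NSG contains broad deny rules, allow rules along these lines restore the required traffic; the rule names and priorities are placeholders, and the CIDRs match the values used in this walkthrough:

# Allow node CIDR -> pod CIDR on all ports and protocols
az network nsg rule create --resource-group aks-rg --nsg-name aks-nsg \
    --name AllowNodeToPod --priority 400 --direction Inbound --access Allow \
    --protocol '*' --source-address-prefixes 10.100.0.0/16 \
    --destination-address-prefixes 172.17.0.0/16 --destination-port-ranges '*'

# Allow pod CIDR -> pod CIDR on all ports and protocols
az network nsg rule create --resource-group aks-rg --nsg-name aks-nsg \
    --name AllowPodToPod --priority 401 --direction Inbound --access Allow \
    --protocol '*' --source-address-prefixes 172.17.0.0/16 \
    --destination-address-prefixes 172.17.0.0/16 --destination-port-ranges '*'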

If you wish to restrict traffic between workloads in the cluster, we recommend using network policies.
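A minimal sketch of such a policy, applied with kubectl (this assumes a network policy engine such as Azure Network Policies or Calico is enabled on the cluster, and the app labels are illustrative):

# Allow ingress to app=demo-app pods only from app=frontend pods
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: demo-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
EOF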

References:

https://learn.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking

https://learn.microsoft.com/en-us/azure/aks/azure-cni-overlay

https://learn.microsoft.com/en-us/azure/aks/configure-kubenet

https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni?source=recommendations&tabs=configure-networking-portal



