Secure AKS cluster
When we create a Kubernetes cluster, by default the API server, which exposes the Kubernetes API, is assigned a public IP, and the communication between the API server and the nodes travels over the public network. Access is controlled using RBAC, and we can tighten it further by enabling authorized IP ranges, which restricts API server access to a defined set of IP ranges.
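As a minimal sketch, authorized IP ranges can be enabled with the Azure CLI; the resource group, cluster name, and CIDR below are placeholders:

```sh
# Restrict the public API server endpoint to a known address range.
az aks update --resource-group rg-aks --name aks-demo \
  --api-server-authorized-ip-ranges 203.0.113.0/24
```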
When we create a private AKS cluster, the communication between the API server and the nodes stays within the private network. The API server is exposed as an Azure Private Link service through a private endpoint, and the nodes access it privately from their VNets. More details can be found in the Azure documentation on private AKS clusters. We can create a more secure AKS cluster using this approach.
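As a minimal sketch, the flag below makes the API server private at creation time (resource names are placeholders):

```sh
# Create an AKS cluster whose API server is reachable only over Private Link.
az aks create --resource-group rg-aks --name aks-private \
  --enable-private-cluster
```

A fuller create command that places the cluster in the spoke VNet appears later in this walkthrough.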
Let's look at the resources created for the private AKS cluster. The recommended way of creating a private cluster is to use a hub-spoke topology. Since the AKS cluster is accessible only through a private IP, we need the following components to access it:
- A jump server or VM to access the AKS cluster
- Azure Firewall - this allows the AKS cluster to reach only the whitelisted networks and applications on the internet. More details can be found in the Azure Firewall documentation.
These resources are provisioned in the hub VNet.
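A rough sketch of provisioning these hub resources with the Azure CLI, assuming illustrative names and address ranges:

```sh
# Hub VNet with a dedicated subnet for Azure Firewall and one for the jump box.
az network vnet create -g rg-hub -n vnet-hub \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name AzureFirewallSubnet --subnet-prefixes 10.0.1.0/24

az network vnet subnet create -g rg-hub --vnet-name vnet-hub \
  -n snet-jumpbox --address-prefixes 10.0.2.0/24

# Jump server used to reach the private API server.
az vm create -g rg-hub -n vm-jumpbox --image Ubuntu2204 \
  --vnet-name vnet-hub --subnet snet-jumpbox \
  --admin-username azureuser --generate-ssh-keys

# Azure Firewall (the firewall commands require the azure-firewall extension).
az extension add --name azure-firewall
az network firewall create -g rg-hub -n fw-hub
```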
The firewall will have a public IP address and a private IP address as shown.
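A sketch of wiring up those addresses, with placeholder names; the query path for the private IP is an assumption based on the CLI's usual output shape:

```sh
# Public IP for the firewall; Azure assigns the private IP from
# AzureFirewallSubnet when the IP configuration is created.
az network public-ip create -g rg-hub -n pip-fw --sku Standard

az network firewall ip-config create -g rg-hub -f fw-hub -n fw-ipconfig \
  --public-ip-address pip-fw --vnet-name vnet-hub

# Read back the firewall's private IP for use in the route table later.
az network firewall show -g rg-hub -n fw-hub \
  --query "ipConfigurations[0].privateIpAddress" -o tsv
```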
The firewall settings with the network rules are shown below. One of the rules can use the AzureContainerRegistry service tag, which allows the nodes to pull Docker images.
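A hedged example of such a network rule; the collection name, priority, and source CIDR are illustrative:

```sh
# Allow the AKS subnet to reach Azure Container Registry over HTTPS
# using the AzureContainerRegistry service tag.
az network firewall network-rule create -g rg-hub -f fw-hub \
  --collection-name aks-network-rules --name allow-acr \
  --priority 100 --action Allow \
  --protocols TCP \
  --source-addresses 10.1.0.0/16 \
  --destination-addresses AzureContainerRegistry \
  --destination-ports 443
```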
The firewall settings with the application rules will be as shown. One of the rules allows access to OS updates.
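A sketch of such an application rule; the FQDNs are assumptions for Ubuntu-based node pools and may differ for your node image:

```sh
# Allow the AKS subnet to reach OS update endpoints.
az network firewall application-rule create -g rg-hub -f fw-hub \
  --collection-name aks-app-rules --name allow-os-updates \
  --priority 100 --action Allow \
  --source-addresses 10.1.0.0/16 \
  --protocols Http=80 Https=443 \
  --target-fqdns security.ubuntu.com archive.ubuntu.com
```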
The hub will also have a UDR which routes all traffic originating from the AKS cluster to the private IP of the firewall, as shown.
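A sketch of that route, assuming the firewall's private IP came out as 10.0.1.4 and using the spoke VNet and subnet created in the next step:

```sh
# Default route sending all egress from the AKS subnet to the firewall.
az network route-table create -g rg-spoke -n rt-aks

az network route-table route create -g rg-spoke --route-table-name rt-aks \
  -n default-to-firewall --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

# Associate the route table with the AKS subnet.
az network vnet subnet update -g rg-spoke --vnet-name vnet-spoke \
  -n snet-aks --route-table rt-aks
```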
The AKS cluster will be created in the spoke VNet as shown below.
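A hedged sketch of the spoke VNet and the private cluster inside it; with userDefinedRouting as the outbound type, egress flows through the UDR (and hence the firewall) rather than a public load balancer:

```sh
# Spoke VNet and subnet for the cluster; prefixes are illustrative.
az network vnet create -g rg-spoke -n vnet-spoke \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name snet-aks --subnet-prefixes 10.1.0.0/22

SUBNET_ID=$(az network vnet subnet show -g rg-spoke --vnet-name vnet-spoke \
  -n snet-aks --query id -o tsv)

# Private cluster in the spoke subnet.
az aks create -g rg-spoke -n aks-private \
  --enable-private-cluster \
  --network-plugin azure \
  --vnet-subnet-id "$SUBNET_ID" \
  --outbound-type userDefinedRouting
```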
When we create a private cluster, the kube-apiserver is exposed through a private endpoint with a network interface attached, and a private DNS zone is also created.
The private DNS zone will have an A record which points to the private IP of the API server.
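We can read back the cluster's private FQDN, which is the name that A record resolves (the output shown is illustrative):

```sh
az aks show -g rg-spoke -n aks-private --query privateFqdn -o tsv
# e.g. aks-priva-xxxx.1234abcd.privatelink.eastus.azmk8s.io
```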
The AKS VNet, which is part of the spoke, will be added to the virtual network links automatically as shown below. We also need to add the hub VNet to the virtual network links so that DNS resolution works from the jump server.
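A sketch of adding that link; the zone name and node resource group below are illustrative, since AKS generates them:

```sh
# Link the hub VNet to the cluster's private DNS zone so the jump server
# can resolve the API server's private FQDN.
az network private-dns link vnet create \
  --resource-group MC_rg-spoke_aks-private_eastus \
  --zone-name 1234abcd.privatelink.eastus.azmk8s.io \
  --name hub-link \
  --virtual-network $(az network vnet show -g rg-hub -n vnet-hub --query id -o tsv) \
  --registration-enabled false
```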
We also need to connect the hub and spoke VNets using peering, as shown.
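A sketch of the two peering links (names are placeholders):

```sh
# Hub -> spoke
az network vnet peering create -g rg-hub -n hub-to-spoke \
  --vnet-name vnet-hub --allow-vnet-access \
  --remote-vnet $(az network vnet show -g rg-spoke -n vnet-spoke --query id -o tsv)

# Spoke -> hub
az network vnet peering create -g rg-spoke -n spoke-to-hub \
  --vnet-name vnet-spoke --allow-vnet-access \
  --remote-vnet $(az network vnet show -g rg-hub -n vnet-hub --query id -o tsv)
```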
We can verify the DNS resolution from the jump server VM as follows.
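For example (the FQDN is the illustrative one from earlier; the resolved address should be a private IP from the AKS subnet):

```sh
nslookup aks-priva-xxxx.1234abcd.privatelink.eastus.azmk8s.io
```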
These configurations work when you are using the Azure-provided DNS, which is the default. If you are using a custom DNS server or trying to connect from an on-premises machine, DNS resolution will not work. In such scenarios, you might need to create a VM as a DNS resolver that forwards to the Azure-provided IP address 168.63.129.16, set up a conditional forwarder for the private zone, and then link the resolver VM's VNet in the private DNS zone.
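As a rough sketch, on a Linux resolver VM running BIND, a conditional forwarder for the zone might look like this (the zone name is illustrative):

```sh
# Forward lookups for the AKS private zone to Azure's recursive resolver.
sudo tee -a /etc/bind/named.conf.local <<'EOF'
zone "privatelink.eastus.azmk8s.io" {
    type forward;
    forward only;
    forwarders { 168.63.129.16; };
};
EOF
sudo systemctl restart bind9
```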