Cluster API - Azure
Cluster API provides a consistent way of creating Kubernetes clusters across bare metal, on-prem, and various cloud environments.
It uses a Kubernetes cluster to create other Kubernetes clusters, applying the Kubernetes style of API reconciliation to cluster creation.
It implements the various components required to create a Kubernetes cluster as Custom Resource Definitions (CRDs).
To start provisioning a Kubernetes (workload) cluster, we first need a management cluster, and for this we can use a kind cluster. More details can be found here. The workload cluster can be thought of as two components:
Control plane - This specifies the control plane provider (e.g. kubeadm) and the infrastructure provider (e.g. Azure) for the control plane. The kubeadm control plane provider references a cloud provider machine template, which is used to create the control plane machines.
The CRDs for the control plane are: Cluster, AzureCluster, KubeadmControlPlane, and AzureMachineTemplate.
```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  labels:
    cni: calico
  name: capi-quickstart
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: capi-quickstart-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AzureCluster
    name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureCluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  location: centralus
  networkSpec:
    vnet:
      name: capi-quickstart-vnet
  resourceGroup: capi-quickstart
  subscriptionID: 526be93c-8b93-4ca3-a34f-559d10cdcef4
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AzureMachineTemplate
    name: capi-quickstart-control-plane
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          cloud-config: /etc/kubernetes/azure.json
          cloud-provider: azure
        extraVolumes:
          - hostPath: /etc/kubernetes/azure.json
            mountPath: /etc/kubernetes/azure.json
            name: cloud-config
            readOnly: true
        timeoutForControlPlane: 20m
      controllerManager:
        extraArgs:
          allocate-node-cidrs: "false"
          cloud-config: /etc/kubernetes/azure.json
          cloud-provider: azure
          cluster-name: capi-quickstart
        extraVolumes:
          - hostPath: /etc/kubernetes/azure.json
            mountPath: /etc/kubernetes/azure.json
            name: cloud-config
            readOnly: true
      etcd:
        local:
          dataDir: /var/lib/etcddisk/etcd
    diskSetup:
      filesystems:
        - device: /dev/disk/azure/scsi1/lun0
          extraOpts:
            - -E
            - lazy_itable_init=1,lazy_journal_init=1
          filesystem: ext4
          label: etcd_disk
        - device: ephemeral0.1
          filesystem: ext4
          label: ephemeral0
          replaceFS: ntfs
      partitions:
        - device: /dev/disk/azure/scsi1/lun0
          layout: true
          overwrite: false
          tableType: gpt
    files:
      - contentFrom:
          secret:
            key: control-plane-azure.json
            name: capi-quickstart-control-plane-azure-json
        owner: root:root
        path: /etc/kubernetes/azure.json
        permissions: "0644"
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-config: /etc/kubernetes/azure.json
          cloud-provider: azure
        name: '{{ ds.meta_data["local_hostname"] }}'
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-config: /etc/kubernetes/azure.json
          cloud-provider: azure
        name: '{{ ds.meta_data["local_hostname"] }}'
    mounts:
      - - LABEL=etcd_disk
        - /var/lib/etcddisk
    useExperimentalRetryJoin: true
  replicas: 3
  version: v1.19.1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureMachineTemplate
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  template:
    spec:
      dataDisks:
        - diskSizeGB: 256
          lun: 0
          nameSuffix: etcddisk
      location: centralus
      osDisk:
        diskSizeGB: 128
        managedDisk:
          storageAccountType: Premium_LRS
        osType: Linux
      sshPublicKey: ""
      vmSize: Standard_D2s_v3
```
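Once the management cluster is ready, the control plane manifests above are applied to it like any other Kubernetes resources. A minimal sketch (the filename is hypothetical; the kubeconfig secret naming follows the Cluster API `<cluster-name>-kubeconfig` convention):

```shell
# Apply the control plane manifests to the management cluster
kubectl apply -f control-plane.yaml

# Watch the control plane being reconciled; status updates as the
# control plane machines come up
kubectl get kubeadmcontrolplane -n default -w

# The workload cluster's kubeconfig is stored in a secret on the
# management cluster, under the key "value"
kubectl get secret capi-quickstart-kubeconfig \
  -o jsonpath='{.data.value}' | base64 -d > capi-quickstart.kubeconfig
```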
Data plane - This can be thought of as similar to a Kubernetes Deployment, except that it manages machines instead of pods. It specifies the bootstrap provider (e.g. kubeadm) used to join worker nodes to the control plane, as well as the machine template used for the worker nodes.
The CRDs are: MachineDeployment, KubeadmConfigTemplate, and AzureMachineTemplate.
```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  clusterName: capi-quickstart
  replicas: 3
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: capi-quickstart-md-0
      clusterName: capi-quickstart
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AzureMachineTemplate
        name: capi-quickstart-md-0
      version: v1.19.1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureMachineTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template:
    spec:
      location: centralus
      osDisk:
        diskSizeGB: 128
        managedDisk:
          storageAccountType: Premium_LRS
        osType: Linux
      sshPublicKey: ""
      vmSize: Standard_D2s_v3
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template:
    spec:
      files:
        - contentFrom:
            secret:
              key: worker-node-azure.json
              name: capi-quickstart-md-0-azure-json
          owner: root:root
          path: /etc/kubernetes/azure.json
          permissions: "0644"
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cloud-config: /etc/kubernetes/azure.json
            cloud-provider: azure
          name: '{{ ds.meta_data["local_hostname"] }}'
      useExperimentalRetryJoin: true
```
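Because the MachineDeployment behaves like a Deployment, worker nodes can be added or removed by changing its replica count. A sketch against the manifests above:

```shell
# Scale the worker pool from 3 to 5 nodes; Cluster API creates
# two new machines on Azure and joins them to the cluster
kubectl scale machinedeployment capi-quickstart-md-0 --replicas=5

# Watch the new machines being provisioned
kubectl get machines -n default -w
```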
With this model, we can manage all our Kubernetes clusters as infrastructure as code in source control: upgrading a cluster to a newer version without application downtime, or adding control plane and worker nodes, becomes just a change to the configuration files.
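For example, a version upgrade can be sketched as a single field change (the target version here is hypothetical; a real upgrade may also require a new machine template referencing a matching VM image):

```shell
# Roll the control plane to a newer Kubernetes version. Cluster API
# replaces the control plane machines one at a time, so the API
# server stays available throughout the upgrade.
kubectl patch kubeadmcontrolplane capi-quickstart-control-plane \
  --type merge -p '{"spec":{"version":"v1.19.2"}}'

# Worker nodes are rolled the same way via the MachineDeployment
kubectl patch machinedeployment capi-quickstart-md-0 \
  --type merge -p '{"spec":{"template":{"spec":{"version":"v1.19.2"}}}}'
```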
More details for setting up the cluster can be found here.
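As a rough sketch of that setup, assuming kind, clusterctl, and kubectl are installed and Azure service principal credentials are already configured in the environment:

```shell
# Create the local management cluster with kind
kind create cluster --name capi-management

# Install the Cluster API core components plus the Azure
# infrastructure provider (CAPZ) into the management cluster
clusterctl init --infrastructure azure

# Verify the Azure provider controllers are running
kubectl get pods -n capz-system
```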
More details of the concepts can be found here.