Kubernetes - Taints, Tolerations, Node Selectors & Node Affinity
Taints, Tolerations, Node Selectors & Node Affinity are the mechanisms Kubernetes provides to control which nodes pods are scheduled onto.
Before we start, let's create a kind cluster. Refer to this article for installation -
1. Node Selector:
A nodeSelector is the simplest way to constrain a pod: the scheduler places the pod only onto nodes that carry specific labels.
Step 1: Add the label to the node
kubectl label nodes simple-multinode-cluster-worker2 prod=true
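To confirm the label was applied, you can show it as a column (-L is shorthand for --label-columns):
kubectl get node simple-multinode-cluster-worker2 -L prod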
Step 2: Specify the same label in the nodeSelector field of the pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    prod: "true"
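Apply the manifest and check which node the pod landed on (assuming it was saved as nginx-pod.yaml; use whatever filename you chose):
kubectl apply -f nginx-pod.yaml
kubectl get pod nginx -o wide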
Note: Check the Node field; you will see the pod is running on simple-multinode-cluster-worker2.
2. Taints and Tolerations:
Taints repel pods: a tainted node accepts only pods that declare a matching toleration. Every taint carries one of three effects:
- NoSchedule: new pods without a matching toleration are not scheduled onto the node.
- PreferNoSchedule: the scheduler tries to avoid placing untolerated pods on the node, but it is not guaranteed.
- NoExecute: untolerated pods are not scheduled, and any already running on the node are evicted.
Step 1: Add a taint to the node
kubectl taint nodes simple-multinode-cluster-worker prod=true:NoSchedule
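You can verify the taint by describing the node; the Taints field should show prod=true:NoSchedule:
kubectl describe node simple-multinode-cluster-worker | grep -i taints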
Step 2: Add the toleration to the pod:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: prod
    operator: Equal
    value: "true"
    effect: NoSchedule
To understand how this works, let us work with a single worker node and walk through two scenarios:
1. Tainted node and Pod without Toleration.
kubectl taint nodes simple-multinode-cluster-worker prod=true:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
Note: This pod will not be scheduled onto the node because it does not have a matching toleration.
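You can see this for yourself: the pod stays in Pending, and its events should include a FailedScheduling message mentioning the untolerated taint:
kubectl get pod nginx
kubectl describe pod nginx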
2. Tainted node and Pod with Toleration.
kubectl taint nodes simple-multinode-cluster-worker prod=true:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: prod
    operator: Equal
    value: "true"
    effect: NoSchedule
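This time the pod tolerates the taint and is scheduled onto the tainted node, which you can confirm from the Node column:
kubectl get pod nginx -o wide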
Step 3: Combine Node Selectors and Tolerations:
Is it necessary to combine both? In certain situations, yes. A taint only keeps untolerated pods away from a node; a toleration on its own does not force the pod onto that node. Combining Node Selectors with Taints and Tolerations adds two layers of control to pod scheduling: the taint keeps everything else off the node, and the nodeSelector pins the pod to it, ensuring only specific pods run on specific nodes.
kubectl label nodes simple-multinode-cluster-worker environment=prod
kubectl taint nodes simple-multinode-cluster-worker environment=prod:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    environment: prod
  tolerations:
  - key: environment
    operator: Equal
    value: prod
    effect: NoSchedule
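When you are done with the demo, the taint and the label can be removed again; a trailing minus sign removes a taint or a label:
kubectl taint nodes simple-multinode-cluster-worker environment=prod:NoSchedule-
kubectl label nodes simple-multinode-cluster-worker environment-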
Use Cases:
- Dedicated nodes: reserve a group of nodes for a particular team or workload by tainting and labeling them, and giving only that workload's pods the matching toleration and nodeSelector.
- Special hardware: keep general workloads off nodes with GPUs or other specialized hardware, while letting the pods that need it target those nodes.
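For example, a GPU pool could be reserved like this (gpu-worker is a hypothetical node name, not part of our kind cluster):
kubectl label nodes gpu-worker hardware=gpu
kubectl taint nodes gpu-worker hardware=gpu:NoSchedule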
3. Node Affinity
Node Affinity is a more advanced and flexible alternative to nodeSelector for controlling pod scheduling.
It allows you to define rules to attract pods to specific nodes based on node labels, using logical operators like In, NotIn, Exists, etc.
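As a rough sketch of the other operators (assuming a hypothetical disktype node label), a matchExpressions block could match nodes that have any disktype label and are not labelled environment=dev:
- matchExpressions:
  - key: disktype
    operator: Exists
  - key: environment
    operator: NotIn
    values: [dev]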
Types of Node Affinity
1. requiredDuringSchedulingIgnoredDuringExecution (Hard Rule): the pod is scheduled only onto nodes that satisfy the rule; if no node matches, the pod stays Pending.
2. preferredDuringSchedulingIgnoredDuringExecution (Soft Rule): the scheduler prefers nodes that satisfy the rule, but will still place the pod elsewhere if none match.
In both cases, IgnoredDuringExecution means pods that are already running are not evicted if the node's labels change later.
Example: Scheduling Pods with Node Affinity
You want a pod that must run on a node labelled environment=prod, and preferably in zone us-east-1. Label the nodes accordingly:
kubectl label nodes simple-multinode-cluster-worker environment=prod zone=us-east-1
kubectl label nodes simple-multinode-cluster-worker2 environment=prod zone=us-west-1
kubectl label nodes simple-multinode-cluster-worker3 environment=dev zone=us-east-1
apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: environment
            operator: In
            values: [prod]
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values: [us-east-1]
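Apply the manifest and check where the scheduler placed the pod (assuming the file is saved as nginx-affinity.yaml):
kubectl apply -f nginx-affinity.yaml
kubectl get pod nginx-affinity -o wide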
Note: Check the Node field; you will see the pod is running on simple-multinode-cluster-worker, the only node that satisfies both environment=prod (required) and zone=us-east-1 (preferred).