Using Multiple Schedulers in Kubernetes
Kubernetes comes with a default scheduler that assigns pods to nodes. However, you may want to run multiple schedulers in your cluster for special scheduling requirements: a dedicated scheduler for GPU nodes, dividing scheduling load between schedulers, or trying out an experimental scheduler.
In this tutorial, we will cover how to set up and configure multiple Kubernetes schedulers to run side-by-side. This involves creating configuration files for each scheduler, giving them unique names, disabling leader election, and specifying which scheduler to use in pod specifications.
We will walk through a detailed example of creating two schedulers - one for GPU nodes and the default Kubernetes scheduler. You will see how to configure each scheduler, run them simultaneously, and schedule pods to the desired scheduler.
Running multiple schedulers provides flexibility to implement custom scheduling logic in Kubernetes. You can optimize pod placement and resource utilization with specialized schedulers tailored to your workload needs. We will cover the key steps involved in setting this up in a clear and concise way.
By the end, you will understand how to leverage multiple schedulers in your own Kubernetes cluster to enhance scheduling capabilities. This powerful functionality allows you to easily add custom scheduling logic as needed.
Overview of Kubernetes Scheduling
The Kubernetes scheduler is a control plane component that watches for newly created pods and assigns them to nodes. The scheduler considers factors like resource requirements, hardware constraints, affinity/anti-affinity rules, and more to decide the optimal node for each pod.
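For instance, the spec below shows the kind of constraints the scheduler weighs: a CPU/memory request plus a node affinity rule (the image and the disktype=ssd label are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx                  # illustrative workload
      resources:
        requests:
          cpu: "500m"               # the scheduler only considers nodes with 500m CPU unreserved
          memory: "256Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype       # hypothetical node label
                operator: In
                values:
                  - ssd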
By default, Kubernetes ships a single scheduler that implements the default scheduling algorithm. In clusters set up with kubeadm, it runs as a static pod whose manifest lives at /etc/kubernetes/manifests/kube-scheduler.yaml on the control plane node(s).
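If your cluster was built with kubeadm, you can see that scheduler running as a static pod (the component=kube-scheduler label is what kubeadm applies; other distributions may label it differently):

kubectl get pods -n kube-system -l component=kube-scheduler
# the manifest itself lives on the control plane node:
# cat /etc/kubernetes/manifests/kube-scheduler.yaml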
Kubernetes allows you to run multiple scheduler instances side-by-side, using different scheduling policies and algorithms. Each scheduler must be configured with a unique name to identify it.
Running Multiple Schedulers
Here are the main steps to set up multiple schedulers:
1. Create a KubeSchedulerConfiguration file for each additional scheduler with a unique schedulerName.
2. Run each scheduler as its own pod (or deployment) in the kube-system namespace.
3. Disable leader election for the additional schedulers, since each runs as a single replica.
4. Set spec.schedulerName in your pod specs to pick which scheduler handles each pod.
Let's go through a detailed example to see this in action.
Example: GPU and Default Schedulers
We will create two schedulers in this example: a custom scheduler named my-gpu-scheduler for GPU workloads, and a standard default-scheduler instance.
First, create a configuration file for the GPU scheduler called gpu-scheduler-config.yaml and place it at /etc/kubernetes/gpu-scheduler-config.yaml on the node that will run the scheduler pod:
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: my-gpu-scheduler
    plugins:
      postFilter:
        enabled:
          - name: DefaultPreemption
      # Other extension points (filter, score, etc.) can be configured here
This defines a scheduling profile named my-gpu-scheduler. Within the profile you enable or disable filter and score plugins (the successors to the older predicates and priorities) to steer pods onto GPU nodes.
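As one hedged illustration of such tuning (these plugin names exist under the v1beta1 API, roughly Kubernetes 1.19-1.21), the GPU profile could favour bin-packing so that accelerator nodes fill up before new ones are used:

apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: my-gpu-scheduler
    plugins:
      score:
        disabled:
          - name: NodeResourcesLeastAllocated   # default spreading-style score
        enabled:
          - name: NodeResourcesMostAllocated    # prefer nodes that are already heavily allocated

Packing GPU workloads tightly tends to keep whole nodes free for large jobs, which is a common reason to run a dedicated scheduler for accelerators.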
Next, create pod manifests to run both schedulers. Save the GPU scheduler pod as gpu-scheduler.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-scheduler
  namespace: kube-system
spec:
  containers:
    - name: gpu-scheduler
      # the image must be a version that understands the kubescheduler.config.k8s.io/v1beta1 API (v1.19+)
      image: k8s.gcr.io/kube-scheduler:v1.19.0
      command:
        - kube-scheduler
        - --config=/etc/kubernetes/gpu-scheduler-config.yaml
      volumeMounts:
        - name: scheduler-config
          mountPath: /etc/kubernetes/gpu-scheduler-config.yaml
          readOnly: true
  volumes:
    # the configuration file created above, present on the host at this path
    - name: scheduler-config
      hostPath:
        path: /etc/kubernetes/gpu-scheduler-config.yaml
        type: File
  hostNetwork: true
And name the default scheduler pod default-scheduler.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: default-scheduler
  namespace: kube-system
spec:
  containers:
    - name: default-scheduler
      image: k8s.gcr.io/kube-scheduler:v1.19.0
      command:
        - kube-scheduler
        - --scheduler-name=default-scheduler
  hostNetwork: true
Note that each scheduler ends up with a unique name: my-gpu-scheduler comes from the schedulerName field in its configuration profile, while the second instance sets its name with the --scheduler-name flag (default-scheduler is also that flag's default value).
Next, disable leader election for the schedulers by adding the --leader-elect=false flag. Leader election only matters when you run several replicas of the same scheduler for high availability; with a single replica each, disabling it lets the schedulers start without waiting to acquire a leader lock:
command:
  - kube-scheduler
  - --leader-elect=false
  # Other flags...
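If you would rather keep everything in the configuration file, the same setting can be expressed in the KubeSchedulerConfiguration instead of a flag:

apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: false
profiles:
  - schedulerName: my-gpu-scheduler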
Now create the pods:
kubectl create -f gpu-scheduler.yaml
kubectl create -f default-scheduler.yaml
Both schedulers will now be running in the cluster.
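You can verify that before scheduling anything with them:

kubectl get pods -n kube-system
# expect to see gpu-scheduler and default-scheduler in the Running state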
Finally, to schedule a pod with the GPU scheduler, specify the name in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  schedulerName: my-gpu-scheduler
  # ... containers and the rest of the pod spec
Pods without a schedulerName will go to the default scheduler.
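To confirm which scheduler actually placed a pod, look at its events; the scheduling event is reported under the scheduler's name:

kubectl describe pod gpu-pod
# the Events section should show a "Scheduled" event from my-gpu-scheduler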
And that's it! You now have multiple schedulers running independently in your cluster. You can continue adding more to meet your specific scheduling requirements.
Some key points to keep in mind:
- Every scheduler needs a unique name, and pods select one via spec.schedulerName.
- Pods that do not set schedulerName are handled by the default scheduler.
- Run additional schedulers with leader election disabled unless you deploy multiple replicas of them.
- The scheduler image version must support the configuration API version you use (v1beta1 requires roughly v1.19+).
- In a real cluster the scheduler pod also needs RBAC permissions to bind pods, typically by binding its service account to the built-in system:kube-scheduler ClusterRole.
This provides very flexible and granular control over pod scheduling in Kubernetes.
Conclusion
In this tutorial, we walked through an example of configuring multiple Kubernetes schedulers, including a GPU scheduler and the default scheduler.
We covered the main steps involved - creating configuration files for each scheduler, giving them unique names, disabling leader election, and specifying the desired scheduler in pod specs.
You saw how to get multiple scheduler instances running independently and scheduling pods as needed. This provides greater flexibility compared to relying solely on the default scheduler.
Some key takeaways are that you can now add custom scheduling logic by creating new schedulers tailored to your use cases. Scheduling can be optimized for factors like hardware requirements, affinity rules, and more.
Additionally, you can leverage multiple schedulers to divide load or isolate certain workloads if needed. Schedulers can run specialized policies for particular pod groups.
In summary, supporting multiple schedulers unlocks more powerful scheduling capabilities in Kubernetes. You can easily enhance scheduling as your needs grow by adding custom schedulers. This tutorial provided a hands-on example to demonstrate configuring and using multiple schedulers effectively.