Kubernetes Resource Quota and LimitRange
Jayesh Tanna
Senior Software Engineer at Microsoft | Aspiring Machine Learning Enthusiast
Kubernetes allows you to manage your application in numerous ways. When your users are spread across multiple teams or projects, it makes sense to start using namespaces within the Kubernetes cluster. In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Namespaces are a way to divide cluster resources among multiple users (via resource quotas). Each namespace can have one or more containers running inside it.
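As a sketch, a namespace can be created declaratively with a manifest like the one below (the name teamx is just an illustrative example used throughout this article):

```yaml
# namespace.yaml - creates an isolated namespace for one team
apiVersion: v1
kind: Namespace
metadata:
  name: teamx
```

Apply it with kubectl apply -f namespace.yaml, or create the namespace imperatively with kubectl create namespace teamx.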
After creating a namespace for each group or team within the cluster, consider what happens if one team (i.e. namespace) consumes a disproportionate share of cluster resources like CPU and memory, leaving other teams starved, since the cluster has only a limited amount of hardware available. This is the noisy neighbour problem.
In a shared Kubernetes environment, it is important to pre-define the allocation of resources for each namespace to avoid unintended resource contention and depletion.
To avoid this, as an administrator you first create a namespace within the cluster, and then use ResourceQuota and LimitRange to assign resource quotas to namespaces and set limits for containers running inside any namespace.
The resource quota is the total available resources for a particular namespace, while limit range is used to assign limits for containers running inside the namespace.
Resource Quotas
After creating namespaces, we can use the ResourceQuota object to cap the total amount of resources used by a namespace. A ResourceQuota can set limits on the number of objects of a given type that can be created within a namespace, as well as quotas for compute resources like CPU and memory.
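For example, a ResourceQuota can also cap object counts. The following sketch (names and numbers are illustrative) limits how many Pods, Services, and ConfigMaps can exist in the namespace:

```yaml
# teamx-object-quota.yaml - caps object counts rather than compute resources
apiVersion: v1
kind: ResourceQuota
metadata:
  name: teamx-object-quota
  namespace: teamx
spec:
  hard:
    pods: "10"        # at most 10 Pods in the namespace
    services: "5"     # at most 5 Services
    configmaps: "10"  # at most 10 ConfigMaps
```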
A ResourceQuota for setting quota on resources looks like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: teamx-resource-quota
  namespace: teamx
spec:
  hard:
    limits.cpu: 150m
    limits.memory: 600Mi
    requests.cpu: 150m
    requests.memory: 600Mi
limits.cpu is the maximum CPU limit for all the containers in the namespace, i.e. the entire namespace.
limits.memory is the maximum memory limit for all the containers in the namespace, i.e. the entire namespace.
requests.cpu is the maximum total CPU requests for all the containers in the namespace. As per the above YAML, the total requested CPU in the namespace must not exceed 150m.
requests.memory is the maximum total memory requests for all the containers in the namespace. As per the above YAML, the total requested memory in the namespace must not exceed 600Mi.
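Note that once a compute-resource quota is active in a namespace, every new pod must specify requests and limits for those resources (or receive them from a LimitRange), otherwise the API server rejects it. A pod that fits within the quota above might look like this (the pod name and image are illustrative):

```yaml
# demo-pod.yaml - a pod whose requests and limits fit within the quota
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  namespace: teamx
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 50m       # counts against requests.cpu (150m total)
        memory: 200Mi  # counts against requests.memory (600Mi total)
      limits:
        cpu: 50m       # counts against limits.cpu (150m total)
        memory: 200Mi  # counts against limits.memory (600Mi total)
```

With this sizing, at most three such pods fit in the namespace before the quota is exhausted.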
LimitRange for Containers
We can create a LimitRange object in our namespace to set resource limits on containers running within the namespace. It also provides default limit values for pods that do not specify them themselves, helping distribute resources evenly within a namespace.
A LimitRange provides constraints that can enforce minimum and maximum resource usage per container, and supply default requests and limits for containers that do not specify their own. A LimitRange looks like this:
apiVersion: v1
kind: LimitRange
metadata:
  name: teamx-limit-range
spec:
  limits:
  - default:
      memory: 200Mi
      cpu: 50m
    defaultRequest:
      memory: 200Mi
      cpu: 50m
    max:
      memory: 200Mi
      cpu: 50m
    min:
      memory: 200Mi
      cpu: 50m
    type: Container
The above YAML file has 4 sections: max, min, default, and defaultRequest.
The default section sets the default limits for a container in a pod. Any container with no limits defined gets these values assigned by default.
The defaultRequest section sets the default requests for a container in a pod. Any container with no requests defined gets these values assigned by default.
The max section sets the maximum limits that a container in a pod can set. The value specified in the default section cannot be higher than this value.
The min section sets the minimum requests that a container in a pod can set. The value specified in the defaultRequest section cannot be lower than this value.
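To see the defaults in action, consider a pod created without a resources section at all (the pod name and image below are illustrative). With the LimitRange above in place, the admission controller fills in requests and limits of 50m CPU and 200Mi memory automatically, which you can confirm by inspecting the created pod with kubectl get pod defaulted-app -n teamx -o yaml:

```yaml
# defaulted-pod.yaml - no resources section specified
apiVersion: v1
kind: Pod
metadata:
  name: defaulted-app
  namespace: teamx
spec:
  containers:
  - name: app
    image: nginx
    # no resources section: the LimitRange supplies the
    # defaultRequest and default values at admission time
```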