Do you need to set Kubernetes CPU limits?


Managing the resources available to your Pods and containers is a Kubernetes administration best practice. You need to prevent Pods from greedily consuming your cluster’s CPU and memory. Overuse by one set of Pods can cause resource contention that slows down neighboring containers and destabilizes your hosts.

However, Kubernetes resource management is often misunderstood. Two mechanisms are provided to control assignments: requests and limits. This leads to four possible settings per Pod, if you set a request and a limit for both CPU and memory.

Setting all four values is usually not optimal: CPU limits are best left out, as they hurt performance and waste spare capacity. This article explains the issue so that you can run a more effective cluster.

How requests and limits work

Requests are used for scheduling. New pods are only assigned to nodes that can fulfill their requests. If there is no matching node, the pod remains in the Pending state until resources are available.

Limits determine the maximum resource usage the Pod is allowed. When the limit is reached, the Pod cannot use any more of that resource, even if there is spare capacity on its Node. The actual effect of hitting the limit depends on the resource in question: exceeding a CPU limit results in throttling, while exceeding a memory limit causes the OOM killer to terminate container processes.

In the following example, a Pod with these constraints will only be scheduled to Nodes that can provide 500m (equivalent to 0.5 CPU cores). At runtime, it can consume up to 1000m before it is throttled, provided the Node has spare capacity available.

resources:
  requests:
    cpu: 500m
  limits:
    cpu: 1000m
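
For context, here’s a minimal sketch of where this block sits in a complete Pod manifest; the container name and image are illustrative placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app                # illustrative container name
      image: nginx:latest      # placeholder image
      resources:
        requests:
          cpu: 500m            # used for scheduling decisions
        limits:
          cpu: 1000m           # throttling kicks in beyond this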

Why CPU limits are dangerous

To understand why CPU limits are problematic, consider what happens if a pod with the resource settings shown above (500m request, 1000m limit) is deployed on a quad-core node with a total CPU capacity of 4000m. For simplicity, no other Pods run on the Node.

$ kubectl get pods -o wide
NAME            READY       STATUS      RESTARTS    AGE     IP              NODE
demo-pod        1/1         Running     0           1m      10.244.0.185    quad-core-node

The Pod is scheduled to the Node immediately because the 500m request can be fulfilled, and it enters the Running state. Under normal load, its CPU usage sits at a few hundred millicores.

Then there’s a sudden spike in traffic: requests pour in and the Pod’s CPU demand jumps as high as 2000m. Because of the CPU limit, it is throttled to 1000m. The Node isn’t running any other Pods, though, so it could deliver the full 2000m if the Pod weren’t constrained by its limit.

The Node’s capacity has been wasted and the Pod’s performance has been unnecessarily reduced. If the CPU limit is omitted, the Pod can use the full 4000m, fulfilling all requests up to four times faster.
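
As a point of comparison, here’s a sketch of the same block with the limit removed; the request still guides scheduling, but the Pod can now burst into any spare capacity on its Node:

resources:
  requests:
    cpu: 500m
  # no cpu limit: spare Node capacity stays available to the Pod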

Requests still prevent resource hogging without limits

Omitting CPU limits doesn’t compromise stability, provided you have the correct requests set up on each Pod. When multiple pods are deployed, each pod’s share of CPU time is scaled in proportion to the request.

Here’s what happens when two Pods with no limits are deployed on an 8-core (8000m) Node and both demand 100% CPU at the same time:

Pod     CPU request     Demand      CPU allocated
1       500m            100%        2000m
2       1500m           100%        6000m
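
In other words, each Pod’s allocation is the Node’s 8000m weighted by its share of the total requests: Pod 1 receives 8000m × 500 / (500 + 1500) = 2000m, while Pod 2 receives 8000m × 1500 / 2000 = 6000m.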

If Pod 1 is in a quieter period, Pod 2 can use even more CPU cycles:

Pod     CPU request     Demand      CPU allocated
1       500m            20%         400m
2       1500m           100%        7600m
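
Pod 1 now only needs 20% of its 2000m share, or 400m, so the remaining 7600m of the Node’s capacity is free for Pod 2 to consume.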

CPU requests are still important

These examples show why CPU requests matter. Setting the right requests avoids contention by ensuring Pods are only scheduled to Nodes that can support them. It also guarantees a weighted distribution of available CPU cycles when multiple Pods experience increased demand.

CPU limits do not provide these benefits. They are only valuable when you deliberately want to throttle a Pod above a certain performance threshold. That is almost always undesirable behavior: it assumes your other Pods will always be busy, when in reality they may be idle and leaving spare CPU cycles in the cluster.

When limits are left unset, those cycles can be used by any workload that needs them. This results in better overall performance because available hardware is never wasted.

What about memory?

Memory is managed in Kubernetes with the same request and limit concepts. However, memory is a fundamentally different resource from CPU and needs its own allocation approach. Memory is incompressible: once allocated to a container process, it cannot be reclaimed. Processes share CPU time as it becomes available, but each gets its own dedicated portion of memory.

Setting an identical request and limit is the best practice approach for Kubernetes memory management. This allows you to reliably anticipate the total memory consumption of all pods in your cluster.
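
As a minimal sketch, a memory configuration following this practice could look like the following (the 512Mi figure is an arbitrary example to size for your workload):

resources:
  requests:
    memory: 512Mi    # example value
  limits:
    memory: 512Mi    # identical to the request, so total usage stays predictable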

It may seem logical to set a relatively low request with a much higher limit. However, using this technique across many Pods can have a destabilizing effect: if several Pods exceed their requests at once, your cluster’s memory could be exhausted. The OOM killer will then intervene to terminate container processes, potentially disrupting your workloads. Any of your Pods can be targeted, not just the one that caused the memory to run out.

With equal requests and limits, a Pod cannot be scheduled unless the Node can provide the necessary memory. It also ensures the Pod cannot use more memory than its explicit allocation, eliminating the risk of overuse when multiple Pods exceed their requests. Over-commitment becomes apparent when you try to schedule a Pod and no Node can fulfill the memory request: the error surfaces sooner and more predictably, with no impact on other Pods.

Overview

Kubernetes lets you distinguish between the amount of resources a container requires and an upper bound it is allowed to scale up to but not exceed. However, this mechanism is less useful in practice than it might appear at first glance.

Setting CPU limits prevents your processes from using spare CPU capacity as it becomes available. This unnecessarily degrades performance when a Pod could temporarily use cycles that no neighbor needs.

Use sensible CPU requests to avoid scheduling Pods on Nodes that are already too busy to perform well. Leave the CPU limit unset so Pods can access additional resources for demanding tasks when capacity is available. Finally, assign a memory request and limit to each Pod, using the same value for both fields. This prevents memory exhaustion and creates a more stable, predictable cluster environment.
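
Putting these recommendations together, a container’s resources block might look like this sketch, with the specific values being placeholders to tune for your workload:

resources:
  requests:
    cpu: 500m        # schedule only onto Nodes with this much CPU available
    memory: 512Mi    # placeholder value
  limits:
    memory: 512Mi    # equal to the memory request; no CPU limit is set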
