Kubernetes: Assign CPU Resource Defaults and Limits to Containers

Alibaba Cloud
12 min read · May 29, 2019

By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

This tutorial demonstrates the following:

  • How to declare the CPU resource requirements of your Pods
  • How Kubernetes administrators define default CPU resource requirements
  • How Kubernetes administrators define CPU resource usage limits

CPU resource usage limits are needed to prevent any one Pod from totally consuming ALL the CPU time on one node (a server running Kubernetes and capable of hosting Pods). You can use this functionality in development to prevent developers from accidentally hogging all CPU resources.

You can use this functionality in production to ensure that no accidental runaway process in one Pod hogs all CPU resources and prevents everything else from doing productive work.

This tutorial will cover the following topics:

  • Pod declares CPU request and limits
  • Pod defines using half a CPU — Millicores syntax
  • Pod defines using half a CPU — fractional syntax
  • Pod declares no CPU request and limits
  • CPU LimitRanges
  • LimitRange with defaults and min, max values
  • Pod that does not adhere to CPU limits
  • LimitRanges and Namespaces

1) Pod Declares CPU Request and Limits

Pods can declare how much CPU they need.

See example below:

  • Pod declares it needs 1 full CPU at maximum
  • Pod declares it needs half a CPU at minimum

CPU resource limits are defined using one of two syntaxes:

  • 1, 2, 2.5 … define 1, 2 and 2.5 CPUs
  • 1000m, 2500m, 150m … define 1 CPU, 2.5 CPUs and 0.150 CPUs

The second syntax uses millicores. 1000m equals one CPU on any machine.

( One millicore is 1/1000 of a CPU, therefore 1000m equals 1 CPU )

A four core server has a CPU capacity of 4000m.
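
If you want to check how much CPU your own node has, kubectl describe node reports it under Capacity (substitute your own node name; the exact fields shown vary slightly by Kubernetes version):

kubectl describe node <your-node-name> | grep -A 5 Capacity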

Millicores syntax is easier to read: 150m versus 0.150. Use whichever syntax you prefer, but agree on one standard at your company.
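
For example, the two values below specify exactly the same amount of CPU (half a core); only the notation differs:

resources:
  requests:
    cpu: "0.5"     # fractional syntax: half a CPU
  limits:
    cpu: "500m"    # millicores syntax: also half a CPU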

The Pod below declares that it needs 500m minimum (the request) and 1 CPU maximum (the limit).

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The CPU Bench Pod is Running ; sleep 3600']

    resources:
      limits:
        cpu: "1"
      requests:
        cpu: 500m

  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created

We now need to test how those CPU limits are enforced.

We do that using sysbench — a Linux benchmark utility.

Syntax:

sysbench --threads=1 --cpu-max-prime=8005005 --verbosity=0 cpu run

  • --threads=1 … run using 1 thread
  • --cpu-max-prime=8005005 … calculate prime numbers up to this value
  • --verbosity=0 … do not show details while doing the run
  • cpu run … run the cpu benchmark included in sysbench

( I randomly played with the cpu-max-prime value until I got one that takes around 15 seconds to run. That is enough time to switch to the other console and do a screen capture of the top command output. )

We defined that our Pod needs 1 full CPU to do its work. We can test that using sysbench inside our Pod.

Use kubectl exec to open a shell inside the Pod. There we can run sysbench at the Pod's Linux shell prompt.

kubectl exec mybench-pod -i -t -- /bin/bash

Throughout this tutorial, enter the sysbench command as shown at the top of these output blocks.

sysbench --threads=1 --cpu-max-prime=8005005 --verbosity=0 cpu run

Tasks: 203 total,   1 running, 202 sleeping,   0 stopped,   0 zombie
%Cpu0 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 0.0 us, 1.7 sy, 0.0 ni, 98.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 :100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 0.0 us, 1.7 sy, 0.0 ni, 98.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
31866 root 20 0 87.8m 5.1m 100.0 0.3 0:03.37 S sysbench --threads=1 --cpu-max-prime=8005005 --verbosity+

As expected, our Pod uses 1 full CPU at 100%. You can see that in the 4 individual core detail lines as well as in the process detail line of our thread.

In the test below we let sysbench run 2 threads.

How to filter top command output:

  • press letter o … this starts its filter functionality
  • in our specific case: enter COMMAND=sysbench and press enter
  • only sysbench processes shown
  • press l to hide the load average line
  • press m to hide the memory and swap line.
sysbench --threads=2 --cpu-max-prime=8005005 --verbosity=0 cpu run

Tasks: 203 total,   1 running, 202 sleeping,   0 stopped,   0 zombie
%Cpu0 : 51.4 us, 0.0 sy, 0.0 ni, 48.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 50.0 us, 0.0 sy, 0.0 ni, 50.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 1.4 us, 0.0 sy, 0.0 ni, 98.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 0.0 us, 0.0 sy, 0.0 ni, 98.6 id, 0.0 wa, 0.0 hi, 1.4 si, 0.0 st
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
32589 root 20 0 87.9m 5.1m 102.7 0.3 0:03.61 S sysbench --threads=2 --cpu-max-prime=8005005 --verbosity+

As expected, our Pod uses 1 full CPU overall at 100%.

In the 4 individual core detail lines you can see that each of the 2 threads runs on its own CPU, each using around 50% CPU, keeping total usage at 100%.

You will see that more clearly in tests below using 3 and 4 threads.

Test using 3 threads:

sysbench --threads=3 --cpu-max-prime=8005005 --verbosity=0 cpu run

Tasks: 203 total,   1 running, 202 sleeping,   0 stopped,   0 zombie
%Cpu0 : 32.5 us, 0.9 sy, 0.0 ni, 66.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 32.2 us, 0.0 sy, 0.0 ni, 67.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 32.2 us, 0.9 sy, 0.0 ni, 67.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 0.9 us, 0.9 sy, 0.0 ni, 98.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
335 root 20 0 88.0m 5.0m 97.4 0.3 0:08.41 S sysbench --threads=3 --cpu-max-prime=8005005 --verbosity+

As expected, our Pod still uses 1 full CPU overall at 100%.

In the 4 individual core detail lines you can see that each of the 3 threads runs on its own CPU, each using around 33% CPU, keeping total usage at 100%.

sysbench --threads=4 --cpu-max-prime=8005005 --verbosity=0 cpu run

Tasks: 203 total,   1 running, 202 sleeping,   0 stopped,   0 zombie
%Cpu0 : 25.5 us, 0.9 sy, 0.0 ni, 73.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 26.1 us, 0.9 sy, 0.0 ni, 72.1 id, 0.0 wa, 0.0 hi, 0.9 si, 0.0 st
%Cpu2 : 26.6 us, 0.9 sy, 0.0 ni, 72.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 26.4 us, 0.9 sy, 0.0 ni, 71.8 id, 0.0 wa, 0.0 hi, 0.9 si, 0.0 st
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
678 root 20 0 88.0m 5.1m 100.9 0.3 0:05.22 S sysbench --threads=4 --cpu-max-prime=8005005 --verbosity+

As expected, our Pod still uses 1 full CPU overall at 100%.

In the 4 individual core detail lines you can see that each of the 4 threads runs on its own CPU, each using around 25% CPU, keeping total usage at 100%.
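
If you are curious how the kubelet enforces this throttling, it sets a CFS quota on the container's cgroup. Assuming the node uses cgroup v1 (the paths differ under cgroup v2), you can inspect the quota from inside the Pod:

cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us    # 100000 for a 1-CPU limit (cgroup v1 assumption)
cat /sys/fs/cgroup/cpu/cpu.cfs_period_us   # 100000 (the 100 ms scheduling period)

A quota of 100000 microseconds per 100000-microsecond period is exactly one CPU's worth of time, matching the limit we declared.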

Basic demo completed, delete Pod.

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted

I use --force --grace-period=0 to delete the Pod immediately. Do not use this in production. By default Pods get 30 seconds to do their shutdown routines ( after receiving the delete command ).

2) Pod Defines Using Half a CPU — Millicores Syntax

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The CPU Bench Pod is Running ; sleep 3600']

    resources:
      limits:
        cpu: "500m"
      requests:
        cpu: 500m

  restartPolicy: Never

Note CPU limit is 500m = half a CPU.

Create the Pod.

kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created

Exec into Pod:

kubectl exec mybench-pod -i -t -- /bin/bash

Run benchmark.

sysbench --threads=1 --cpu-max-prime=8005005 --verbosity=0 cpu run

Tasks: 208 total,   1 running, 207 sleeping,   0 stopped,   0 zombie
%Cpu0 : 1.1 us, 1.1 sy, 0.0 ni, 97.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 38.6 us, 1.1 sy, 0.0 ni, 60.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 11.5 us, 0.0 sy, 0.0 ni, 88.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 1.1 us, 0.0 sy, 0.0 ni, 98.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
4210 root 20 0 87.8m 5.1m 50.0 0.3 0:02.55 S sysbench --threads=1 --cpu-max-prime=8005005 --verbosity+

Pod got limited to half a CPU — works as expected.

Demo completed, delete Pod.

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted

3) Pod Defines Using Half a CPU — Fractional Syntax

Same Pod as before, only using 0.5 to define half a CPU.

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The CPU Bench Pod is Running ; sleep 3600']

    resources:
      limits:
        cpu: 0.5
      requests:
        cpu: 0.5

  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created

Enter Pod:

kubectl exec mybench-pod -i -t -- /bin/bash

Run sysbench:

sysbench --threads=1 --cpu-max-prime=8005005 --verbosity=0 cpu run

Tasks: 208 total,   1 running, 207 sleeping,   0 stopped,   0 zombie
%Cpu0 : 1.5 us, 1.5 sy, 0.0 ni, 96.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 0.0 us, 1.6 sy, 0.0 ni, 98.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 0.0 us, 1.6 sy, 0.0 ni, 98.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 52.3 us, 0.0 sy, 0.0 ni, 47.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
5639 root 20 0 87.8m 5.1m 50.8 0.3 0:01.49 S sysbench --threads=1 --cpu-max-prime=8005005 --verbosity+

Identical to previous test ( proving 500m = half a CPU )

Let's investigate how Kubernetes interpreted our Pod YAML spec.

( Output truncated to show only the relevant parts. The rest of this tutorial likewise shows only the relevant describe output. )

kubectl describe pod/mybench-pod

Name:               mybench-pod
Containers:
  mybench-container:
    State:          Running
      Started:      Fri, 11 Jan 2019 09:17:09 +0200
    Ready:          True
    Limits:
      cpu:  500m
    Requests:
      cpu:  500m

Kubernetes shows the millicores value 500m, not 0.5.
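
You can see the same canonical form directly in the stored Pod spec; the jsonpath expression below is just one way to check it:

kubectl get pod mybench-pod -o jsonpath='{.spec.containers[0].resources}'

The output should show cpu: 500m for both the request and the limit, confirming that the API server normalized our 0.5 into millicores.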

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted

4) Pod Declares No CPU Request and Limits

The Pod below has no reference to requests or limits.

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The CPU Bench Pod is Running ; sleep 3600']

  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created

Enter Pod and run sysbench test.

kubectl exec mybench-pod -i -t -- /bin/bash

sysbench --threads=4 --cpu-max-prime=8005005 --verbosity=0 cpu run

%Cpu0  : 95.3 us,  4.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.4 si,  0.0 st
%Cpu1 : 97.8 us, 1.8 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
%Cpu2 : 96.8 us, 2.9 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
%Cpu3 : 97.8 us, 1.4 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.7 si, 0.0 st
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
9142 root 20 0 88.0m 5.1m 392.4 0.3 0:33.63 S sysbench --threads=4 --cpu-max-prime=8005005 --verbosity+

Reckless Pod hogs all 4 cores at 100%. This is why you need CPU limits.

Delete this Pod:

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted

5) CPU LimitRanges

We need default CPU limits that are automatically applied to ALL Pods in a namespace.

LimitRanges fit the bill.

The LimitRange below adds default limits to ALL Pods that do not declare their own limits.

nano mycpu-limit-range.yaml

apiVersion: v1
kind: LimitRange
metadata:
  name: mycpu-limit-range
spec:
  limits:
  - default:
      cpu: 0.75
    defaultRequest:
      cpu: 0.25
    type: Container

Create the LimitRange.

kubectl create -f mycpu-limit-range.yaml

limitrange/mycpu-limit-range created
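
You can check what the LimitRange will apply with kubectl describe. The output should look roughly like this (the column layout may differ slightly between Kubernetes versions):

kubectl describe limitrange mycpu-limit-range

Name:       mycpu-limit-range
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    250m             750m           -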

We reuse the Pod spec from the previous example: no limits in its spec.

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The CPU Bench Pod is Running ; sleep 3600']

  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created

Describe the Pod to see how Kubernetes fulfilled our desired Pod spec.

kubectl describe pod/mybench-pod

Name:               mybench-pod
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container mybench-container; cpu limit for container mybench-container
Containers:
  mybench-container:
    State:          Running
      Started:      Fri, 11 Jan 2019 09:48:24 +0200
    Ready:          True
    Limits:
      cpu:  750m
    Requests:
      cpu:  250m

Note the annotation:

LimitRanger automatically added a cpu limit for container mybench-container

The limits listed agree with the LimitRange spec.

Best practice: have LimitRanges at your development shop.

Delete this specific LimitRange.

kubectl delete limits/mycpu-limit-range

limitrange "mycpu-limit-range" deleted

6) LimitRange with Defaults and Min, Max Values

Previously we used only the LimitRange defaults.

LimitRanges also define min and max limits. See spec below:

3 examples of new Pod creations follow so you can see how this works. ( My tutorials teach via examples rather than long text descriptions with few demos. )

nano mycpu-limit-range.yaml

apiVersion: v1
kind: LimitRange
metadata:
  name: mycpu-limit-range
spec:
  limits:
  - default:
      cpu: 0.75
    defaultRequest:
      cpu: 0.25
    max:
      cpu: "2000m"
    min:
      cpu: "200m"
    type: Container

Create our LimitRange

kubectl create -f mycpu-limit-range.yaml

Define our Pod.

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The CPU Bench Pod is Running ; sleep 3600']

    resources:
      limits:
        cpu: "1000m"
      requests:
        cpu: 300m

  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created

This Pod does not need defaults — it declares its own.

This Pod spec's limits fall within the allowed min-max range of the LimitRange defined above. The Pod create command went smoothly, so there is nothing else to show here.

Describe Pod shows Pod created exactly as we specified.

kubectl describe pod/mybench-pod

Name:               mybench-pod
Containers:
  mybench-container:
    State:          Running
    Ready:          True
    Limits:
      cpu:  1
    Requests:
      cpu:  300m

7) Pod That Does Not Adhere to CPU Limits

Edit Pod spec.

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The CPU Bench Pod is Running ; sleep 3600']

    resources:
      limits:
        cpu: "3000m"
      requests:
        cpu: 100m

  restartPolicy: Never

Note our CPU request is below the LimitRange min.

Note our CPU limit is above the LimitRange max.

Attempt to create the Pod:

kubectl create -f myBench-Pod.yaml

Error from server (Forbidden): error when creating "myBench-Pod.yaml": pods "mybench-pod" is forbidden: [minimum cpu usage per Container is 200m, but request is 100m., maximum cpu usage per Container is 2, but limit is 3.]

Error is as expected.

Modify Pod spec to be within limits.

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The CPU Bench Pod is Running ; sleep 3600']

    resources:
      limits:
        cpu: "1500m"
      requests:
        cpu: 250m

  restartPolicy: Never

Create the Pod.

kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created

Pod create worked. It falls within min and max limits.
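
If you describe this Pod, the Limits and Requests sections should show 1500m and 250m exactly as declared, since the LimitRanger only fills in values for containers that do not declare their own. One quick way to check:

kubectl describe pod/mybench-pod | grep -A 1 -E "Limits|Requests"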

Delete the Pod and the LimitRange.

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted

and …

kubectl delete limits/mycpu-limit-range

limitrange "mycpu-limit-range" deleted

8) LimitRanges and Namespaces

LimitRanges live in Namespaces.

If you are familiar with Namespaces, all you need to know is that a LimitRange you create works only in the Namespace you are currently in.

You should divide your development server into several Namespaces, one per development team. Different teams may need different CPU and RAM limits. That is the purpose of Namespaces: separating resources and visibility based on your needs.
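
As a brief sketch of that workflow (the namespace name dev-team-a is only an example), you would create a namespace per team and give each one its own LimitRange:

kubectl create namespace dev-team-a
kubectl create -f mycpu-limit-range.yaml --namespace=dev-team-a
kubectl get limitrange --namespace=dev-team-a

Pods created in dev-team-a then pick up that namespace's defaults; Pods in other Namespaces are unaffected.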

If you do not define any Namespaces or LimitRanges, your server is WIDE open to CPU and RAM abuse, both deliberate and accidental.

For more information, see https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/

Conclusion

CPU defaults and limits are easy to understand, so let's put this knowledge to the test. Create your own LimitRange and modify your own Pod with:

  • Limit and request
  • Limit and without request
  • No limit and with request

Enter the Pod via kubectl exec and run sysbench. Exceed the self-declared limits and the LimitRange max to learn how all of this interacts.

Here are several other exercises for you to try:

  • Using just one Pod, request many more CPU cores than are available on your node.
  • Create 3 Pods whose total CPU requests exceed the number of CPU cores on the node.
  • Create 3 Pods whose total CPU limits exceed the number of CPU cores on the node.
  • Since a Pod may have several containers, create a Pod with 3 containers (each with different limits) and investigate how those limits are enforced. For example, if 3 containers in a Pod request 1 CPU each, the total Pod demand is 3 CPUs. You must run 3 Pods simultaneously if you want to test Pods that collectively exceed node CPU capability. A starter spec for the three-container Pod follows this list.
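
Here is one possible starter spec for that last exercise, reusing the image from this tutorial; the container names and CPU values are only examples:

apiVersion: v1
kind: Pod
metadata:
  name: my3container-pod
spec:
  containers:
  - name: container-a
    image: mytutorials/centos:bench
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
  - name: container-b
    image: mytutorials/centos:bench
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: 500m
      limits:
        cpu: "1"
  - name: container-c
    image: mytutorials/centos:bench
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: 750m
      limits:
        cpu: 1500m
  restartPolicy: Never

The scheduler sums the requests (250m + 500m + 750m = 1500m) when deciding whether the Pod fits on a node, while each container's limit is enforced individually.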

Hopefully, by trying out these exercises, you will be able to accurately predict the behavior of any Pod given any LimitRange spec.

Reference: https://www.alibabacloud.com/blog/kubernetes-assign-cpu-resource-defaults-and-limits-to-containers_594832?spm=a2c41.12911512.0.0
