Kubernetes: Assign Memory Resources and Limits to Containers

  • How to assign memory resources to a Pod when you define it
  • How Kubernetes administrators can put limits on the memory use of Pods (both when a Pod is defined and during runtime)
  • Pod with memory request and limit
  • Pod with 2 containers: each with memory request and limit
  • Pod exceeds RAM limit upon startup: restartPolicy: Never
  • Pod exceeds RAM limit upon startup: restartPolicy: OnFailure
  • LimitRange for memory
  • Pod that requests RAM above and below limits
  • Define LimitRange defaults and limits
  • Pod does not specify RAM limits in its YAML spec file
  • LimitRange in namespaces
  • Kubernetes API objects

1) Pod with Memory Request and Limit

Note the syntax below. This is how we define:

  • 50Mi of memory requested
  • 100Mi memory limit: the maximum RAM we declare our Pod will ever need
nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"

  restartPolicy: Never
kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created
  • --vm 1 ... start one virtual memory worker thread
  • --vm-bytes 50M ... allocate 50MB of RAM in that worker
  • --vm-hang 10 ... the worker waits 10 seconds, then frees and re-allocates its 50MB of RAM
kubectl exec mybench-pod -i -t -- /bin/bash

# stress --vm 1 --vm-bytes 50M --vm-hang 10

Top output (from a second shell into the container) shows the worker holding about 50MB:

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
21149 root      20   0   57.2m  50.4m   0.0   2.7   0:00.11 S stress --vm 1 --vm-bytes 50M --vm-hang 10
21148 root      20   0    7.1m   0.9m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 50M --vm-hang 10
# stress --vm 1 --vm-bytes 90M --vm-hang 10

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
21875 root      20   0   97.2m  90.5m   0.0   4.9   0:00.02 S stress --vm 1 --vm-bytes 90M --vm-hang 10
21874 root      20   0    7.1m   0.9m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 90M --vm-hang 10

# stress --vm 1 --vm-bytes 95M --vm-hang 10

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
22143 root      20   0  102.2m  95.3m   0.0   5.1   0:00.02 S stress --vm 1 --vm-bytes 95M --vm-hang 10
22142 root      20   0    7.1m   0.6m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 95M --vm-hang 10
[root@mybench-pod /]# stress --vm 1 --vm-bytes 97M --vm-hang 10
stress: info: [78] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [78](415) <-- worker 79 got signal 9
stress: WARN: [78](417) now reaping child worker processes
stress: FAIL: [78](451) failed run completed in 0s
[root@mybench-pod /]# exit
kubectl describe pod mybench-pod

Status:          Running
State:           Running
  Started:       Thu, 10 Jan 2019 07:47:43 +0200
Ready:           True
Restart Count:   0
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---  ----               -------
  Normal  Scheduled  12m  default-scheduler  Successfully assigned default/mybench-pod to minikube
  Normal  Pulled     12m  kubelet, minikube  Container image "centos:bench" already present on machine
  Normal  Created    12m  kubelet, minikube  Created container
  Normal  Started    12m  kubelet, minikube  Started container
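Before deleting the Pod, you can confirm the request and limit that were recorded in its spec. A quick check (assuming a standard kubectl; the exact map formatting of the output varies by version):

kubectl get pod mybench-pod -o jsonpath='{.spec.containers[0].resources}'

map[limits:map[memory:100Mi] requests:map[memory:50Mi]]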
kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted

2) Pod with 2 Containers: Each with Memory Request and Limit

Now we have 2 containers, each with its own memory request and limit. Details are in the YAML file below:

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container-1
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo mybench-container-1 is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"

  - name: mybench-container-2
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo mybench-container-2 is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"
  restartPolicy: Never
kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created
kubectl exec mybench-pod -c mybench-container-1 -i -t -- /bin/bash

[root@mybench-pod /]# stress --vm 1 --vm-bytes 88M --vm-hang 10
stress: info: [22] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd

From a second shell into the container, start another worker:

[root@mybench-pod /]# stress --vm 1 --vm-bytes 95M --vm-hang 10
stress: info: [23] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd

Top now shows both workers:
  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
26346 root      20   0  102.2m  95.4m   0.8   5.1   0:00.04 S stress --vm 1 --vm-bytes 95M --vm-hang 10
26244 root      20   0   95.2m  88.4m   0.0   4.7   0:00.01 S stress --vm 1 --vm-bytes 88M --vm-hang 10
26345 root      20   0    7.1m   0.8m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 95M --vm-hang 10
26243 root      20   0    7.1m   0.8m   0.0   0.0   0:00.00 S stress --vm 1 --vm-bytes 88M --vm-hang 10
[root@mybench-pod /]# stress --vm 1 --vm-bytes 120M --vm-hang 10
stress: info: [24] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [24](415) <-- worker 25 got signal 9
stress: WARN: [24](417) now reaping child worker processes
stress: FAIL: [24](451) failed run completed in 0s
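The 100Mi limit is enforced per container, not per Pod: each container may hold up to 100Mi on its own. For scheduling, however, Kubernetes sums the container requests, so this Pod needs a node with at least 100Mi (2 x 50Mi) of allocatable RAM. To list each container's resources, a sketch using kubectl's jsonpath output (formatting varies by version):

kubectl get pod mybench-pod -o jsonpath='{range .spec.containers[*]}{.name}{" => "}{.resources}{"\n"}{end}'

mybench-container-1 => map[limits:map[memory:100Mi] requests:map[memory:50Mi]]
mybench-container-2 => map[limits:map[memory:100Mi] requests:map[memory:50Mi]]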
kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted

3) Pod Exceeds RAM Limit upon Startup: restartPolicy: Never

We have seen that the overall status of a Pod is unaffected by processes failing inside it. This time, the container's main command itself allocates 150M at startup, well above the 100Mi limit:

command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "10"]
nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container-1
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "10"]
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"
  restartPolicy: Never
kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created
kubectl get po

NAME          READY   STATUS      RESTARTS   AGE
mybench-pod   0/1     OOMKilled   0          9s
kubectl describe pod mybench-pod

Status:          Failed
State:           Terminated
  Reason:        OOMKilled
  Exit Code:     1
  Started:       Thu, 10 Jan 2019 08:30:13 +0200
  Finished:      Thu, 10 Jan 2019 08:30:14 +0200
Ready:           False
Restart Count:   0
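The OOMKilled reason can also be extracted programmatically, which is handy in scripts. A sketch, assuming a single-container Pod:

kubectl get pod mybench-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'

OOMKilled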

4) Pod Exceeds RAM Limit upon Startup: restartPolicy: OnFailure

Let's investigate what happens with restartPolicy: OnFailure.

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container-1
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "10"]
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "50Mi"
  restartPolicy: OnFailure
kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created
kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     OOMKilled          1          2s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     CrashLoopBackOff   1          6s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     CrashLoopBackOff   1          12s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     CrashLoopBackOff   1          15s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     OOMKilled          2          24s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     CrashLoopBackOff   2          35s

kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
mybench-pod   0/1     OOMKilled          3          45s
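Rather than re-running kubectl get po by hand, you can watch the Pod oscillate between OOMKilled and CrashLoopBackOff. Kubernetes doubles the back-off delay after each restart (10s, 20s, 40s, ...), capped at five minutes:

kubectl get po --watch
# streams a new line on every status change; press Ctrl-C to stop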
kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted

5) LimitRange for Memory

Kubernetes administrators can define RAM limits for a namespace using a LimitRange object. Every container created in that namespace must then fit within those limits.

nano myRAM-LimitRange.yaml

apiVersion: v1
kind: LimitRange
metadata:
  name: my-ram-limit
spec:
  limits:
  - max:
      memory: 250Mi
    min:
      memory: 25Mi
    type: Container

Memory quantities must use valid suffixes such as Mi. If you write 250MB and 25MB instead, the API server rejects the LimitRange:

kubectl create -f myRAM-LimitRange.yaml

Error from server (BadRequest): error when creating "myRAM-LimitRange.yaml": LimitRange in version "v1" cannot be handled as a LimitRange: v1.LimitRange.Spec: v1.LimitRangeSpec.Limits: []v1.LimitRangeItem: v1.LimitRangeItem.Min: Max: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|y":"250MB"},"min":{"|..., bigger context ...|fault"},"spec":{"limits":[{"max":{"memory":"250MB"},"min":{"memory":"25MB"},"type":"Container"}]}}

With the valid Mi suffixes, creation succeeds:

kubectl create -f myRAM-LimitRange.yaml

limitrange/my-ram-limit created
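For reference, all of the following are valid ways to express roughly the same memory quantity in a resource spec (the error above quotes the regular expression they must match):

memory: 128974848    # plain bytes
memory: "129e6"      # scientific notation
memory: "129M"       # decimal megabytes
memory: "123Mi"      # binary mebibytes, the conventional choice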

6) Pod That Requests RAM above and below Limits

Some namespaces may be dedicated to large Pods, with no tiny Pods allowed. Our LimitRange enforces a 25Mi minimum, so the Pod below, which requests only 10Mi, will be rejected:

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "10Mi"

  restartPolicy: Never

kubectl create -f myBench-Pod.yaml

Error from server (Forbidden): error when creating "myBench-Pod.yaml": pods "mybench-pod" is forbidden: minimum memory usage per Container is 25Mi, but request is 10Mi.
Next we try a Pod whose 300Mi limit exceeds the 250Mi maximum:

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']
    resources:
      limits:
        memory: "300Mi"
      requests:
        memory: "30Mi"

  restartPolicy: Never

kubectl create -f myBench-Pod.yaml

Error from server (Forbidden): error when creating "myBench-Pod.yaml": pods "mybench-pod" is forbidden: maximum memory usage per Container is 250Mi, but limit is 300Mi.
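Incidentally, on newer clusters (Kubernetes and kubectl v1.18 or later) you can test whether a spec passes the LimitRange admission checks without creating anything, using server-side dry-run; it returns the same Forbidden error a real create would. A sketch:

kubectl create -f myBench-Pod.yaml --dry-run=server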
Finally, a Pod whose request and limit both fall within the LimitRange:

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']
    resources:
      limits:
        memory: "100Mi"
      requests:
        memory: "30Mi"

  restartPolicy: Never
kubectl create -f myBench-Pod.yaml 

pod/mybench-pod created
kubectl exec mybench-pod -i -t -- /bin/bash

[root@mybench-pod /]# stress --vm 1 --vm-bytes 50M --vm-hang 10
stress: info: [22] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
^C
[root@mybench-pod /]# stress --vm 1 --vm-bytes 90M --vm-hang 10
stress: info: [24] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
^C
[root@mybench-pod /]# stress --vm 1 --vm-bytes 120M --vm-hang 10
stress: info: [26] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [26](415) <-- worker 27 got signal 9
stress: WARN: [26](417) now reaping child worker processes
stress: FAIL: [26](451) failed run completed in 1s
[root@mybench-pod /]# exit

kubectl delete -f myBench-Pod.yaml --force --grace-period=0

pod "mybench-pod" force deleted
kubectl delete limits/my-ram-limit
limitrange "my-ram-limit" deleted

7) Define LimitRange Defaults and Limits

Previously our LimitRange only defined min and max limits. Now we add default values as well: a default limit and a default request that get applied to containers that do not declare their own.

nano myRAM-LimitRange.yaml

apiVersion: v1
kind: LimitRange
metadata:
  name: my-ram-limit
spec:
  limits:
  - default:
      memory: 150Mi
    defaultRequest:
      memory: 30Mi
    max:
      memory: 250Mi
    min:
      memory: 25Mi
    type: Container
Watch the field names: the default limit field is called default, not defaultLimit, while the default request field is called defaultRequest:

  limits:
  - default:          # sets the default memory LIMIT
      memory: 150Mi
    defaultRequest:   # sets the default memory REQUEST
      memory: 30Mi
kubectl create -f myRAM-LimitRange.yaml

limitrange/my-ram-limit created

kubectl describe limits

Name:       my-ram-limit
Namespace:  default
Type        Resource   Min   Max    Default Request  Default Limit  Max Limit/Request Ratio
----        --------   ---   ---    ---------------  -------------  -----------------------
Container   memory     25Mi  250Mi  30Mi             150Mi          -
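To double-check the defaults the cluster stored, you can read them back from the LimitRange object itself. A sketch using jsonpath (output formatting varies by kubectl version):

kubectl get limitrange my-ram-limit -o jsonpath='{.spec.limits[0].default}'

map[memory:150Mi]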

8) Pod Does Not Specify RAM Limits in Its YAML Spec File

Note that there are no resource requests or limits in the spec below:

nano myBench-Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mybench-pod
spec:
  containers:
  - name: mybench-container
    image: mytutorials/centos:bench
    imagePullPolicy: IfNotPresent

    command: ['sh', '-c', 'echo The Bench Pod is Running ; sleep 3600']

  restartPolicy: Never
kubectl create -f myBench-Pod.yaml

pod/mybench-pod created
kubectl describe pod mybench-pod

Name:          mybench-pod
Annotations:   kubernetes.io/limit-ranger:
               LimitRanger plugin set: memory request for container mybench-container; memory limit for container mybench-container
Status:        Running
IP:            172.17.0.6
Containers:
  mybench-container:
    ...
    Limits:
      memory:  150Mi
    Requests:
      memory:  30Mi
...
kubectl exec mybench-pod -i -t -- /bin/bash

[root@mybench-pod /]# stress --vm 1 --vm-bytes 140M --vm-hang 10
stress: info: [27] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
^C
[root@mybench-pod /]# stress --vm 1 --vm-bytes 160M --vm-hang 10
stress: info: [29] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [29](415) <-- worker 30 got signal 9
stress: WARN: [29](417) now reaping child worker processes
stress: FAIL: [29](451) failed run completed in 0s
[root@mybench-pod /]# exit
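The kubernetes.io/limit-ranger annotation seen in the describe output above is the audit trail the admission plugin leaves behind. You can read it back directly; a sketch (note the escaped dots in the annotation key):

kubectl get pod mybench-pod -o jsonpath='{.metadata.annotations.kubernetes\.io/limit-ranger}'

LimitRanger plugin set: memory request for container mybench-container; memory limit for container mybench-container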

9) LimitRange in Namespaces

Throughout this tutorial we applied these limits in the default namespace. A LimitRange is a namespaced object: it constrains only the Pods created in its own namespace, so administrators typically define one LimitRange per namespace, as sketched below.
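A minimal sketch of that pattern, reusing the myRAM-LimitRange.yaml from section 7 (my-memory-ns is a hypothetical namespace name):

kubectl create namespace my-memory-ns

namespace/my-memory-ns created

kubectl create -f myRAM-LimitRange.yaml --namespace=my-memory-ns

limitrange/my-ram-limit created

kubectl describe limits --namespace=my-memory-ns

Pods created in my-memory-ns are then subject to these limits; Pods in every other namespace are unaffected.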

10) Kubernetes API Objects

Some online Kubernetes documentation stresses the underlying architecture when describing any Kubernetes topic.

Conclusion

This tutorial does not provide an exhaustive list of all the ways limits can be used. Instead, it gives you enough information to understand the underlying logic. In summary, we learned that:

  • A Pod can self-declare memory limits
  • LimitRange can define min and max memory limits that are enforced for all Pods
  • LimitRange can define default and max memory limits that are automatically added to Pod specs at Pod creation time (if the Pod does not self-declare its memory limits)
