Docker Container Resource Management: CPU, RAM and IO: Part 2

By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

This tutorial aims to give you practical experience of using Docker container resource limitation functionalities on an Alibaba Cloud Elastic Compute Service (ECS) instance.

cpu-shares Proportional to Other Containers

In this test we see that cpu-shares are proportional to the shares of other containers. The default value of 1024 has no intrinsic meaning.

If all containers have --cpu-shares=4 they all get an equal share of CPU time.

This is identical to all containers having --cpu-shares=1024: they still share CPU time equally.
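A minimal sketch of such an equal-shares test (the container names and the dd/md5sum busy workload are illustrative, not from the original listing):

```shell
# Two CPU-hungry containers with identical low share values.
# Under contention, equal shares mean equal CPU time, whether the
# value is 4 or 1024.
docker run -d --name lowshares-a --cpu-shares=4 centos:7 \
    sh -c 'time dd if=/dev/zero bs=1M count=2048 | md5sum'
docker run -d --name lowshares-b --cpu-shares=4 centos:7 \
    sh -c 'time dd if=/dev/zero bs=1M count=2048 | md5sum'
```

Running the same pair with --cpu-shares=1024 should produce the same runtimes, since only the ratio between containers matters.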


Investigate logs:

Prune containers, we are done with them.

Note they all still ran for the same time. The containers with 4 shares did not run 4/1024 as fast as those with 1024 shares.

cpu-shares: Only Enforced When CPU Cycles Are Constrained

cpu-shares are only enforced when CPU cycles are constrained.

With no other containers running, defining --cpu-shares for one container is meaningless.

Now increase the shares to 4000 and rerun: there is zero difference in runtime.

One single container is using all available CPU time: no sharing needed.
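A hedged sketch of that rerun (the workload command is illustrative):

```shell
# A lone container with 4000 shares: with nobody to compete against,
# this behaves exactly like the default 1024 shares.
docker run -it --rm --cpu-shares=4000 centos:7 \
    sh -c 'time dd if=/dev/zero bs=1M count=2048 | md5sum'
```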

Prune this one container; we are done with it.

--cpus= Defines How Much of the Available CPU Resources a Container Can Use

Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.

Note the range of --cpus values we are using in the commands below. Run it:
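The original command listing is not shown here; a minimal sketch of such a run, using the container names discussed below (the inner workload is illustrative), could be:

```shell
# Five competing containers on a 2-CPU host.
# Total requested: .1 + .25 + .5 + 1 + 2 = 3.85 CPUs, more than exist.
for limit in .1 .25 .5 1 2; do
  docker run -d --name mycpu$limit --cpus=$limit centos:7 \
      sh -c 'time dd if=/dev/zero bs=1M count=2048 | md5sum'
done
```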

Investigate docker stats

Expected output:

mycpu.1, mycpu.25 and mycpu.5 perfectly demonstrate the restrictions applied.

However, mycpu1 and mycpu2 do not have an additional 100% + 200% of CPU available. Their settings are therefore ignored and they equally share the remaining CPU time.

--cpus Number of CPUs

The --cpus setting defines the number of CPUs a container may use.

For the purposes of Docker and Linux distros, CPUs are defined as:

CPUs = threads per core × cores per socket × sockets

These are logical CPUs, not physical CPUs.

Let’s investigate my server to determine its number of CPUs.

Unneeded information removed:

CPUs = threads per core × cores per socket × sockets

CPUs = 1 × 2 × 1 = 2 CPUs
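The formula can be checked with plain shell arithmetic. The values below are this server's lscpu figures; substitute your own:

```shell
# Values as reported by lscpu on this server (adjust for yours).
THREADS_PER_CORE=1
CORES_PER_SOCKET=2
SOCKETS=1

# CPUs = threads per core x cores per socket x sockets
CPUS=$((THREADS_PER_CORE * CORES_PER_SOCKET * SOCKETS))
echo "CPUs = $CPUS"    # prints: CPUs = 2
```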

Confirm with:

2 core ids = 2 cores per socket
2 processors = 2 CPUs

Ok this server has 2 CPUs. Your server will be different, so consider that when you investigate all the tests done below.

The --cpus setting defines the number of CPUs a container may use.

Let's use both CPUs, then just one, a half, and a quarter of a CPU, and record the runtimes for a CPU-heavy workload.

Note --cpus=2

Expected output:

We have nothing to compare against. Let's run the other tests.

Note --cpus=1

Note --cpus=.5

Note --cpus=.25
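A sketch of those four timed runs (the inner workload is illustrative; any single-threaded CPU-heavy command works):

```shell
# Run the same workload under four different CPU limits, back to back.
for limit in 2 1 .5 .25; do
  echo "--cpus=$limit"
  docker run -it --rm --cpus=$limit centos:7 \
      sh -c 'time dd if=/dev/zero bs=1M count=1024 | md5sum'
done
```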

--cpus=2 realtime: 3.6 sec
--cpus=1 realtime: 3.5 sec
--cpus=.5 realtime: 9.9 sec
--cpus=.25 realtime: 19.5 sec

Our simple benchmark does not effectively use 2 CPUs simultaneously.

Half a CPU runs twice as slowly, and a quarter CPU runs 4 times slower.

The --cpus setting works. If the applications inside your containers cannot multithread or otherwise effectively use more than 1 CPU, allocate just one CPU.

--cpu-period and --cpu-quota

These are legacy options. If you use Docker 1.13 or higher, use --cpus instead.

--cpu-period Limit the CPU CFS (Completely Fair Scheduler) period
--cpu-quota Limit the CPU CFS (Completely Fair Scheduler) quota

Our exercises above clearly show how easy it is to use the --cpus setting.

--cpuset-cpus CPUs in Which to Allow Execution (0-3, 0,1)

Unfortunately my server has only 2 CPUs, and we saw moments ago that using more than 1 CPU has no effect (with THIS SPECIFIC benchmark).

If your server has several CPUs you can run much more interesting combinations of --cpuset-cpus settings. Even then it will not be very useful here: THIS SPECIFIC benchmark uses only 1 thread.

Later in this tutorial there are tests using sysbench (an actual benchmark tool), which allows you to specify the number of threads.

Here are my results: no difference between using both CPUs, only CPU 1, or only CPU 0.
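A hedged sketch of those three pinning runs on a 2-CPU host (the workload is illustrative):

```shell
# Pin to both CPUs, then to CPU 1 only, then to CPU 0 only.
# With a single-threaded workload all three should take the same time.
for set in 0,1 1 0; do
  echo "--cpuset-cpus=$set"
  docker run -it --rm --cpuset-cpus=$set centos:7 \
      sh -c 'time dd if=/dev/zero bs=1M count=1024 | md5sum'
done
```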

Test Container Limits Using Real Benchmark Applications

All the tests above were done using quick hacks.

To properly test the resource limits of containers we need real Linux bench applications.

I am used to CentOS, so I will use it as the basis of our bench container. Both bench applications are available on Debian / Ubuntu as well. You could easily translate the yum installs to apt-get installs and get identical results.

We need to install 2 bench applications in our container. The best way is to build an image with those applications included.

Therefore create a dockerbench directory:

The first install adds the Percona yum repo, the home of sysbench.

Yum then installs sysbench.

The curl adds the EPEL yum repo, the home of stress.

Yum then installs stress.
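Based on that description, the Dockerfile might look as follows. This is a sketch: the Percona release RPM URL is the one Percona documents (verify it is still current), and EPEL is added here via the epel-release package rather than the curl used in the original:

```shell
mkdir -p dockerbench

# Sketch of the Dockerfile described above (assumed layout, not the
# original file): add the Percona repo, install sysbench, add EPEL,
# install stress.
cat > dockerbench/Dockerfile <<'EOF'
FROM centos:7
RUN yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm \
 && yum install -y sysbench \
 && yum install -y epel-release \
 && yum install -y stress \
 && yum clean all
EOF
```

Build it with: docker build -t centos:bench dockerbench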

Build our bench image. It will take about a minute — based on Internet yum downloads + yum Dependency Resolution and the usual other activities.

If you do not have CentOS 7 image downloaded already it may take another minute.

Now we have a CentOS bench image ready for repeated use (with 2 bench tools installed).

docker run -it --rm centos:bench /bin/sh

--cpus Tested with the sysbench Tool


sysbench --threads=2 --events=4 --cpu-max-prime=800500 --verbosity=0 cpu run

  • --threads=2 … run 2 threads so we can compare 2 CPUs versus 1 CPU
  • --events=4 … do 4 runs
  • --cpu-max-prime=800500 … calculate prime numbers up to 800500
  • --verbosity=0 … do not show detailed output
  • cpu run … run the test named cpu

Via experiments I determined 800500 to be a value that runs the tests quickly enough on my 10 year old computer (CPUmark 700). I added the 5 since long strings of zero digits are difficult to read.
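A sketch of the 2-CPU run, using the centos:bench image from the build step:

```shell
# Give the container both CPUs and let sysbench use 2 threads.
docker run -it --rm --cpus=2 centos:bench \
    sh -c 'time sysbench --threads=2 --events=4 --cpu-max-prime=800500 --verbosity=0 cpu run'
```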

2 CPUs:

Real is wall clock time: the time from start to finish of the sysbench call, here 1.9 seconds.

User is the amount of CPU time spent in user-mode code (outside the kernel) within sysbench. 2 CPUs were used; each used 1.9 seconds of CPU time, so the total user time is the sum over both CPUs.

The elapsed wall clock time is still 1.9 seconds: since the 2 CPUs worked simultaneously / concurrently, their summed time is shown as user time.

Sys is the amount of CPU time spent in the kernel doing system calls.

One CPU:

A more convenient way to run these comparisons is to run the bench command right on the docker run line.

Let’s rerun one CPU this way:

Let’s run half a CPU this way:
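Sketches of those one-line runs:

```shell
# One CPU:
docker run -it --rm --cpus=1 centos:bench \
    sh -c 'time sysbench --threads=2 --events=4 --cpu-max-prime=800500 --verbosity=0 cpu run'

# Half a CPU:
docker run -it --rm --cpus=.5 centos:bench \
    sh -c 'time sysbench --threads=2 --events=4 --cpu-max-prime=800500 --verbosity=0 cpu run'
```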

Results make perfect sense:

  • 2 CPUs : real 0m1.952s
  • 1 CPU : real 0m4.659s
  • .5 CPU : real 0m10.506s

With sysbench in our image, such tests are very easy and quick. In mere seconds you gain experience limiting Docker containers' CPU usage.

Quite frankly, waiting 10.506s for the .5 CPU test is too long, especially if you have a server with many cores.

If you did this on a development server at work, the CPU load could change drastically over an elapsed minute. Developers could have been compiling during the 2-second 2-CPU run, and the server could be CPU-quiet during the long 5-second 1-CPU run, totally skewing our numbers.

We need to have an approach that is somewhat robust against such changing circumstances. Every test must run as quickly as possible and directly one after the other.

Sounds promising, let's try that. Reduce the max prime number 100-fold.

Cut and paste all 3 of these instructions in one go and observe the results:
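The three back-to-back commands could look like this (a sketch, with cpu-max-prime reduced 100-fold from 800500 to 8005):

```shell
# Three runs, pasted as one block so they execute immediately after
# each other and see roughly the same background CPU load.
docker run -it --rm --cpus=2  centos:bench sh -c 'time sysbench --threads=2 --events=4 --cpu-max-prime=8005 --verbosity=0 cpu run'
docker run -it --rm --cpus=1  centos:bench sh -c 'time sysbench --threads=2 --events=4 --cpu-max-prime=8005 --verbosity=0 cpu run'
docker run -it --rm --cpus=.5 centos:bench sh -c 'time sysbench --threads=2 --events=4 --cpu-max-prime=8005 --verbosity=0 cpu run'
```

Scaling cpu-max-prime up or down adjusts the workload: the 10-fold reduction discussed below uses the same three commands with a larger prime limit.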

Expected output:

Benchmark startup overhead overwhelms the wall-clock real times. The tests are hopelessly too short.

After 3 private experiments, decreasing the original workload 10-fold seems perfect.

( The 5 in there is just to make long strings of 000000 more readable. )

Ratios look perfect. Overall runtime is less than a second, which minimizes the effect of changing CPU load on the development server upon our test timings.

Spend a few minutes playing on your server to get an understanding of what is explained here.

Note I used --rm on the run command. This auto-removes the container after it finishes the command handed to it via /bin/sh.

