Practical Exercises for Docker Compose: Part 4

By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

This set of tutorials focuses on giving you practical experience on using Docker Compose when working with containers on Alibaba Cloud Elastic Compute Service (ECS).

Part 3 of this series explored depends_on, volumes and the important init docker-compose options. Part 4 covers some productivity tips, plus the Docker Compose deploy options: placement constraints, replicas, and resource reservations and limits.

Productivity Tips

I have several bash aliases defined for frequently used Docker commands. Here are just two:

Typing psa at the shell is quicker than highlighting the docker ps -a text in the tutorial, copying it, alt-tabbing to the console window, and pasting.
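For example, the two aliases could look like this in your ~/.bashrc. The names psa and nnc match the ones used in this tutorial; the choice of nano as the editor is an assumption, substitute your own.

```shell
# Sketch of two bash aliases; add them to ~/.bashrc.
alias psa='docker ps -a'               # list all containers, quickly
alias nnc='nano docker-compose.yml'    # open the compose file for editing
```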

Faster docker-compose.yml edits:

  1. If you use Chrome, install an auto-copy extension. It automatically copies text you highlight in your browser
  2. Highlight the docker-compose.yml text in the tutorial
  3. Alt-tab to the Linux console
  4. Type nnc ( the editor opens the docker-compose.yml file )
  5. Right-click to paste ( this will be specific to the console software you use )
  6. Save

Without using extensions and aliases, this process would involve around 12 steps instead of the 6 above.

Deploy: placement constraints

Docker Compose placement constraints limit the nodes / servers where a task can be scheduled / run.

First we need to define some labels for our node / server. Then we can define placement constraints based on those labels.

Syntax to add a label to a node:

docker node update --label-add label-name=label-value hostname-of-node

We need our server's hostname for this. Enter hostname at the shell to get YOUR hostname.

Use your hostname below: ( I used localhost.localdomain as my hostname )
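A sketch of the label-add commands, assuming two hypothetical labels, disk and env (any key=value pairs work), and my hostname. These commands need a Docker swarm node to run against.

```shell
docker node update --label-add disk=ssd localhost.localdomain
docker node update --label-add env=dev localhost.localdomain
```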

We can inspect our node to see those labels exist.

head -n 13 shows only the first 13 lines of the long inspect output.
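The inspect command might look like this; the hostname is mine, substitute yours:

```shell
docker node inspect localhost.localdomain | head -n 13
```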

Expected output :

Now we add the placement constraints right at the bottom of docker-compose.yml.

Add the following to your docker-compose.yml:
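A sketch of a compose file with the constraints in place. The alpine service definition is an assumption; the label names (disk, env) are hypothetical and must match whatever labels you added to your node.

```yaml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      placement:
        constraints:
          - node.labels.disk == ssd
          - node.labels.env == dev
```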

The stack deploy command will only place our stack of services on nodes that match both constraints. It is an AND match: adding more constraints means all of them must match. ( There is no OR, and no nested brackets like in nearly all programming languages. )

Since both constraints do match our node, the deploy will be successful.
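The deploy command, using the stack name mystack from this tutorial (requires a Docker swarm):

```shell
docker stack deploy -c docker-compose.yml mystack
```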

Let’s list running containers:

Success. Let’s list all the services in mystack.

docker stack services mystack

Success. See REPLICAS column: 1 running service out of 1 requested service.

Let’s now change the constraint tests so that the deploy cannot be done.

Add the following to your docker-compose.yml:
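For example, change the label values so that no node matches, again assuming the hypothetical disk and env labels:

```yaml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      placement:
        constraints:
          - node.labels.disk == hdd
          - node.labels.env == production
```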

Remove previously deployed stack:
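Using the stack name from above:

```shell
docker stack rm mystack
```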

Let’s list all the services in mystack.

The deploy did not find any suitable nodes. See the REPLICAS column: 0 running services out of 1 requested service.

Examples of how node labels can be used:

  1. Label nodes that have SSDs so that apps requiring SSDs can run only on such nodes.
  2. Label nodes that have graphics cards so that apps can find such nodes
  3. Label nodes with country, state, or city names as required
  4. Separate batch and realtime applications
  5. Run development jobs only on development machines.
  6. Run your under-development applications only on your physical computer, keeping code that hogs CPU and RAM from negatively affecting your colleagues.

You can find more information about placement here:

and about constraints here:

Deploy: replicas

Specify the number of containers that should be running at any given time.

Until now you ran with just one replica, the default.

Let’s demo running 3 replicas.

Add the following to your docker-compose.yml:
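A minimal sketch; the alpine service definition is an assumption, but the replicas value comes from the text:

```yaml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 3
```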

Remove previous running mystack

Expected output :

Deploy our new stack:

Expected output :

The output does not look promising: no mention of 3 replicas. Let's list the services in mystack:

Expected output :

3 replicas running out of 3 requested. Success.

Let’s list running containers:

As expected: 3 containers running.

Investigate our server workload via top:

Each of our 3 tiny containers uses around 2.5 MB of resident RAM.

You now have 3 full Alpine Linux distros running in isolated environments, at around 2.5 MB of RAM each. Impressive.

Compare this to having 3 separate virtual machines. Each such VM would need around 50 MB of RAM overhead just to exist, plus several hundred MB of disk space each.

Each container started up in around 300 ms, which is not possible with VMs.

Deploy: resources: reservations: cpu ( over provision )

We use the reservations: cpu config settings to reserve cpu capacity.

Let’s over-provision cpu to see if Docker follows our instructions.

Add the following to your docker-compose.yml:
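A sketch of the over-provisioning configuration; the alpine service definition is an assumption, while the 6 replicas and the 0.5-CPU reservation per replica come from the calculation below.

```yaml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 6
      resources:
        reservations:
          cpus: '0.5'
```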

My server has 2 cores, so 2 CPUs are available.

In this configuration I am attempting to reserve 6 * 0.5 = 3 CPUs.

You may need to edit those settings to make this fail on your own machine, whether a tiny laptop or a monster employer super-server.

Let’s remove existing stacks.

Let’s attempt deployment:


Sometimes the above error happens; just rerun the deploy until the error no longer appears.

Investigate the result of the deploy:

4 containers listed. This makes sense: 4 * 0.5 = 2 CPUs used.

List all services in mystack:

Only 4 of 6 containers provisioned: Docker ran out of cpus to provision.

Important: this was a reservation provision. It can only reserve what exists.

Deploy: resources: reservations: RAM ( over provision )

Let’s over provision RAM. ( We will use this functionality correctly later in this tutorial. )

Add the following to your docker-compose.yml:
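A sketch, assuming a reservation of 2 GB per replica (6 replicas * 2 GB = 12 GB, far more than the server's 1 GB); the alpine service definition is also an assumption.

```yaml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 6
      resources:
        reservations:
          memory: 2G
```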

My server has 1 GB ram available.

In this configuration I am attempting to reserve 6 * 2 GB = 12 GB.

As before, you may need to edit those settings to make this fail on your own machine, whether a tiny laptop or a monster employer super-server.

Let’s remove existing stacks.

Let’s attempt deployment:

To list running stack services run:

Expected output :

Zero services deployed. Docker does not deploy even one container when only 50% of the specified RAM reservation is available. It correctly assumes that if you specify a RAM reservation, your container needs that MINIMUM to run successfully. Therefore, if the reservation is impossible, the container does not start.

We have seen that resource reservations are obeyed.

Let’s define reasonable limits to see how it works.

Deploy: resources: limits: cpu

The Alpine service below is constrained to use no more than 20M of memory and 0.50 (50%) of available processing time (CPU).

Add the following to your docker-compose.yml:
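A sketch of the limits configuration; the 20M memory limit, the 0.50 CPU limit and the single replica come from the text, while the alpine service definition is an assumption.

```yaml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '0.50'
          memory: 20M
```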

Note we only need one replica from here onwards.

Deploy our stack:

Expected output :

Our container is running. Let’s enter it and benchmark cpu speed.

Enter commands shown at the container / # prompt:

Benchmark explanation:

  1. time: measures elapsed time and prints the 3 timer lines ( real, user, sys )
  2. dd if=/dev/urandom bs=1M count=2: copies two blocks of bs ( block size ) 1 MB of random data
  3. md5sum: calculates the MD5 hash ( giving the cpu a load )
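Putting steps 1 to 3 together, the benchmark is one pipeline (the 2>/dev/null redirect, added here to hide dd's transfer statistics, is optional):

```shell
# CPU benchmark: hash 2 x 1 MB of random data and time it.
time dd if=/dev/urandom bs=1M count=2 2>/dev/null | md5sum
```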

We now have benchmark times for cpu limit 0.5, which mean nothing if we cannot compare them.

So let us change cpu limit to 0.25 in docker-compose.yml

Then run at shell:

Rerun our benchmark:

The results make sense: around 50% slower, with 50% less cpu power available.

Quick final test: 100% cpu power

So let us change cpu limit to 1.00 in docker-compose.yml

Then run at the shell:

Enter commands shown:

Very fast runtimes: the 100% cpu limit is 4 times faster than 25% cpu power.

You now have experienced that limiting cpu power per container works as expected.

If you have only one production server, you can use this knowledge to run cpu-hungry batch processes on the same server as other work: just limit the batch process cpu severely.

Deploy: resources: limits: memory

This configuration option limits max RAM usage for your container.

Add the following to your docker-compose.yml:
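A sketch with the 4 MB memory limit mentioned below; the alpine service definition is an assumption.

```yaml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      resources:
        limits:
          memory: 4M
```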


Expected output :

We now have a running container with a memory limit of 4MB.

Let’s be a typical inquisitive Docker administrator and see what happens if we try to use 8 MB of /dev/shm RAM.

Enter commands as shown:
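The sequence might look like this; the file name /dev/shm/fill is an assumption. Inside the 4 MB-limited container the 8 MB write gets killed, while on an unrestricted host it would succeed.

```shell
df -h /dev/shm                                    # check /dev/shm size and usage
dd if=/dev/zero of=/dev/shm/fill bs=1M count=4    # write 4 MB to /dev/shm
df -h /dev/shm                                    # recheck: about 4M now used
dd if=/dev/zero of=/dev/shm/fill bs=1M count=8    # write 8 MB: killed in the limited container
df -h /dev/shm                                    # usage stopped slightly over 4M
```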

Explanation of what happened above:

First we run df -h to determine the /dev/shm size and usage.

Then we add 4 MB to /dev/shm,

and recheck its usage: 4M is now used.

Then we add 8 MB to /dev/shm, overwriting the previous contents,

and this command gets killed.

Check /dev/shm usage again.

It used slightly over 4 MB before the container ran out of RAM.

Conclusion: Docker Compose memory limits are enforced.

By default, containers have UNLIMITED RAM available to them. Use this resource limit to prevent your RAM from being totally consumed by runaway containers.

Even if you do not know precisely what a good limit is, set one anyway: 20, 50, or 100 MB are all better than letting 240 GB be consumed.


Follow me to keep abreast of the latest technology news, industry insights, and developer trends.