Getting to Know Dockerfile Instructions: Part 1

By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

The official Dockerfile reference is excellent as a reference.

This set of tutorials focuses on giving you practical experience with Dockerfile, with plenty of interactive work at the shell.

After you complete this set of 4 tutorials, you can spend time reading the official Dockerfile reference. Everything will make a lot more sense, since you will then have actual practical experience with most of the Dockerfile instructions.

Suggestion: on day one, skip the first 15 screenfuls of heavy syntax rules and regulations.

Once you have read through this tutorial and played around for a day or two, you will be ready to read those 15 screens of information.

Enough theory, let’s get started.


You need access to an Alibaba Cloud Elastic Compute Service instance with a recent version of Docker already installed.

I am writing this tutorial using CentOS. You can use Debian / Ubuntu. 99% of this tutorial will work on any Linux distro since it mostly uses Docker commands. You can refer to this tutorial to learn how to install Docker on your Linux server.

You need a very basic understanding of Docker: images, containers, and using docker run and docker ps -a.

The purpose of this tutorial is to get you to build several very simple Dockerfiles. Your understanding of Docker concepts will grow as you use it.


It will really help if you have only a few (preferably no) containers running. That way you can easily find your tutorial container in docker ps -a output.

So stop and prune all the containers you do not need running.

You can quickly do that (in your DEVELOPMENT environment) using:

To now remove all containers, run
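In a development environment where every container is disposable, one common pair of commands for this (a sketch; never run these where you have containers you care about) is:

```shell
docker stop $(docker ps -a -q)   # stop every container
docker rm $(docker ps -a -q)     # remove every container
```

The `docker ps -a -q` subcommand lists the IDs of all containers, running or stopped, which the outer command then stops or removes.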

Dockerfile FROM Instruction

This initial part of the tutorial is not very exciting. It only downloads an Alpine version 3.8 image and lets us build our own container using it as base. That is all.

The Dockerfile FROM instruction specifies the base image / Linux distro we want to use to build a container.

Every Dockerfile must have a FROM instruction.

For all our Dockerfile tutorials the tiny Alpine base image is good enough.

I prefer to pull / download images before I need them. ( If you refer to an image in a Dockerfile it will be downloaded automatically — you will see this below. )
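For this tutorial that means pulling the Alpine 3.8 image used throughout before we reference it in a Dockerfile:

```shell
docker pull alpine:3.8   # download the image from Docker Hub to the local cache
```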


Expected output

It's better to keep each Dockerfile in its own directory. All the files and directories in the same directory are used during a docker build, so only include the files and directories that you need to build your image. One directory per image: neat, organized, small, efficient, and understandable in its entirety.

Therefore, enter:
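For example (the directory name here is an assumption; any new empty directory works):

```shell
mkdir tutorial   # one directory dedicated to this image
cd tutorial
```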

There are two editors frequently used in Linux distros: vim and nano. I prefer nano, so I will be using it here.

To get nano installed:

Debian / Ubuntu :

apt install nano

CentOS :

yum install nano

Nano beginners guide:

  1. nano file-to-edit … this starts the nano editor
  2. the cursor keys move around as expected
  3. copying Dockerfile text from these tutorial web pages works as expected
  4. pasting into your Dockerfile works as expected
  5. press F3 to save your work
  6. press F2 to exit the editor.

You are now a nano expert. Those are the only things you need to know to use nano for all 4 of these tutorials.

Use nano to create a Dockerfile like this:

nano Dockerfile
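At this stage the Dockerfile contains a single instruction, the FROM described above:

```dockerfile
FROM alpine:3.8
```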


docker build --file Dockerfile .

Expected output

Notice the Sending build context to Docker daemon 84.48kB. This means that the FULL content of the directory our Dockerfile is in is used as the build context. We can selectively copy files and directories into our final build image: you will see this in action in the ADD / COPY parts of this tutorial further below.

The FROM alpine:3.8 reads the Alpine 3.8 image we downloaded and adds it to our build image. We now have the tiny Alpine Linux distro inside our build image: a complete distro to be used as base operating system for our application.

Build context is all the files and directories in the same directory as your Dockerfile. Therefore you need to place your Dockerfiles in separate directories. If you have your Dockerfile in your root directory, THE FULL LINUX DISTRO is part of your build context: not a good idea.

Let's delete our Alpine image to prove that Docker will automatically download it when a Dockerfile needs it.
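One way to delete the locally cached image, then rebuild so Docker is forced to pull it again:

```shell
docker rmi alpine:3.8                # remove the local copy of the image
docker build --file Dockerfile .     # rebuild; the FROM line triggers a fresh pull
```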

Expected output :


Notice the Pulling from library/alpine in the output. It takes longer to download an image from the Internet than to reuse a local copy.

If you refer to an image in a Dockerfile it will be downloaded automatically. We removed alpine:3.8 image so when we referred to it in a Dockerfile it got downloaded automatically.

Suggestion: I prefer to pull images onto my local machine. Then I can work all day without needing a live Internet connection.


  1. Nano the editor
  2. Build context and the need for separate directories
  3. Use docker pull to download images
  4. Use FROM alpine:3.8 to use the Alpine version 3.8 image
  5. Use docker build --file Dockerfile . to build our own image to use as a base.


For our container to provide a useful functionality it has to contain some software.

During this step we are going to learn how to use ADD and COPY to add files and dirs to our container.

First we need to create some files and tarred / zip files.
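A minimal way to do that, following the in-tar- file name prefix used below (the count of three files is an assumption):

```shell
touch in-tar-1 in-tar-2 in-tar-3      # create three empty files
tar cvf tarredfiles.tar in-tar-*      # archive them into tarredfiles.tar
```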


The touch command creates empty files.

The tar command adds all files starting with in-tar- into tarredfiles.tar .

  1. The c flag creates a new .tar archive file.
  2. The v flag verbosely shows the .tar progress, one file at a time.
  3. The f flag names the output file of the archive operation.

We need to create 3 more files and add them to another tar archive: more-tarredfiles.tar
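A sketch of those commands (the more-in-tar- file names are assumptions; only the archive name more-tarredfiles.tar comes from the tutorial):

```shell
touch more-in-tar-1 more-in-tar-2 more-in-tar-3   # three more empty files
tar cvf more-tarredfiles.tar more-in-tar-*        # archive them separately
```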

We now have 2 tar archives: tarredfiles.tar and more-tarredfiles.tar

Add this to Dockerfile using

nano Dockerfile
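Based on the ADD and COPY lines discussed below, the Dockerfile now looks like this:

```dockerfile
FROM alpine:3.8
ADD tarredfiles.tar /root
COPY more-tarredfiles.tar /root
```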

Build the image using docker build --file Dockerfile .

Expected output

This tutorial contains many very tiny Dockerfile examples that you run.

Before every next run you MUST stop and prune the previous container that is still running. If you do not, you will get an error message saying the container name is already in use:


So you will see these commands frequently: they stop the previous container and delete / prune it. docker ps -a then runs to show that no containers are left running.
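A sketch of that cleanup sequence (the container name tutorial is an assumption):

```shell
docker stop tutorial        # stop the container from the previous run
docker container prune -f   # remove all stopped containers without prompting
docker ps -a                # confirm nothing is left
```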

Let’s start up a container to see the result.
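Assuming the image was built with a tag, e.g. by adding --tag tutorial:demo to the build command (both names here are assumptions), you could run:

```shell
docker run -ti --name tutorial tutorial:demo /bin/sh   # interactive shell in the container
```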

Enter ls /root/ at the /# shell prompt to list the files in /root/.

Expected output

As expected: ADD automatically untars tar files.

As expected: COPY does not automatically untar tar files.

Easy to remember: ADD does Double Duty … it adds and untars / unzips.

COPY just copies like copy and paste — no extra processing done. A photo copier just copies — no double duty.

Best Practice: only use ADD when you need this double duty functionality.

The ADD tarredfiles.tar /root in the Dockerfile automatically untars tarredfiles.tar into /root/.

The COPY more-tarredfiles.tar /root in the Dockerfile merely copies more-tarredfiles.tar into /root/.

You use the exact same syntax to copy individual files into your container.

You can get considerably more information on ADD and COPY in the official Dockerfile reference.

At least now you have practical experience using those 2 commands. Reading those official Docker reference docs will make more sense now.

Using chown to Change File Ownership

This section shows how you use chown to change file ownerships with ADD or COPY.

Let’s create 2 files using:
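For example (the file names are assumptions):

```shell
touch chown-file-1 chown-file-2   # two empty files to copy with changed ownership
```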

Add this to Dockerfile using

nano Dockerfile
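Based on the description below (a username/groupname COPY first, a numeric UID/GID COPY second, one file per line), the Dockerfile would look something like this; the file names are assumptions:

```dockerfile
FROM alpine:3.8
# first copy: change ownership using a username and a groupname
COPY --chown=games:games chown-file-1 /root/
# second copy: change ownership using a numeric UID and GID (35 = games)
COPY --chown=35:35 chown-file-2 /root/
```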

Build the image using docker build --file Dockerfile .

Let’s start up a container to see the result.

Expected output

What happened : extract from the Dockerfile:

Notice the first copy command uses a username and a groupname to change ownerships.

Notice the second copy uses a UID and a GID to change ownerships. 35 is the UID for the games user id.

GID = group identifier

UID = user identifier

If you peek at your password file you will find the text user names alongside those UIDs. We humans use easy text user names, but the kernel uses only short, efficient numbers: UIDs.

Entering just those 2 commands in a Dockerfile and creating a running container from the image teaches you more about --chown than reading 5 paragraphs of theory would.

I used COPY to copy one file on each line, not ADD.

Recursive Copy of dirs Using Dockerfile COPY

Purpose: demo recursive copy of dirs using Dockerfile COPY

First we need to install a tiny but useful tool to display nested directories.

The tree command is a useful tool to neatly show nested directories as a tree structure.

I am using CentOS so I need to run:

yum install tree

( On Ubuntu / Debian you need: apt install tree )

The mkdir creates directories. The -p automatically creates parent directories as required.
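A sketch of that command (the top-level name demo-directories comes from the tutorial; the nested names dir-a and dir-b are assumptions):

```shell
mkdir -p demo-directories/dir-a/dir-b   # -p creates the parent dirs as required
```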

The beginner way to create those directories would have been to run mkdir once for each directory level, top down, without -p.

You could use man mkdir at your shell to read about all the features of mkdir.

Let's see what those nested directories look like:

tree demo-directories


Expected output

Now run ls -R demo-directories at your shell and see the difference: tree output is easier to understand.

Let’s add some files into each of those dirs.

It would have been cool if touch could automatically create those nested directories, so that we only needed touch commands. Unfortunately it cannot, so we must create the directories first, then use touch to create files inside them.
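A sketch of those touch commands, using the same hypothetical nested names (the mkdir is repeated here only so the snippet is self-contained):

```shell
mkdir -p demo-directories/dir-a/dir-b        # already created above
touch demo-directories/file-0                # one file per directory level
touch demo-directories/dir-a/file-1
touch demo-directories/dir-a/dir-b/file-2
```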


Expected output

We now have one file in each of those dirs.

Edit the Dockerfile to look like this:
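Based on the WORKDIR and COPY lines discussed below, the Dockerfile is:

```dockerfile
FROM alpine:3.8
WORKDIR demo-work-dir
COPY demo-directories/ .
```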

The . at the end of the COPY command means the current dir. Since we used WORKDIR demo-work-dir, the current dir is demo-work-dir.

Therefore the COPY will copy all files from demo-directories into demo-work-dir.


Expected output

Build complete, run it:
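A plausible run-then-enter sequence (assuming the image was built with a tag such as tutorial:demo; the container name is also an assumption):

```shell
docker run -d -ti --name tutorial tutorial:demo /bin/sh   # prints the container id
docker exec -it tutorial /bin/sh                          # open a shell inside it
```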

Docker will show container id of the running container.

Let’s enter the container to see the effect of the build. Enter ls -R at the # shell prompt.

Expected output

Notice we are working in the demo-work-dir. This dir got created by WORKDIR demo-work-dir in the Dockerfile.

The COPY demo-directories/ . command in the Dockerfile copied the full recursive content of demo-directories/ into demo-work-dir .

Important: notice the demo-directories dir itself does not get copied, only its contents.

If you wanted the demo-directories dir itself created in the container, you would specify WORKDIR demo-directories in the Dockerfile. WORKDIR creates the dir if it does not exist.

To truly test that you understand the above, rename both the demo-directories dir and the demo-work-dir dir.
Then change the Dockerfile to work with your new dir names. Then rebuild the image, run it, enter the container and look for content in your new dir.

If that works, good. Now make a deliberate mistake with one of those dir names. Rerun all the above. Read the error messages you get and fix it. NOW you understand WORKDIR and COPY better than any theory can teach you.

This concludes Part 1 of 4: Get to know all the Dockerfile Instructions. Continue reading Part 2 to learn more.

