Technical Best Practices for Container Log Processing

By Bruce Wu


Docker, Inc. (formerly dotCloud, Inc.) released Docker as an open source project in 2013. Container products represented by Docker then quickly became popular worldwide thanks to features such as good isolation, high portability, low resource consumption, and fast startup. The following figure shows the search trends for Docker and OpenStack since 2013.

[Figure: search trends for Docker vs. OpenStack since 2013]
However, containers also bring new challenges to log processing:

  1. In the container era, you have far more objects to manage than with virtual machines and physical machines. Troubleshooting by logging in to target containers becomes more complicated and costly.
  2. Container technology makes it easier to implement microservices, but decoupling the system introduces more components. You need a technology that helps you comprehensively understand the running status of the system, quickly locate problems, and accurately restore the context.

Log Processing

This article uses Docker as an example to describe common methods and best practices for container log processing. Distilled from the Alibaba Cloud Log Service team's years of experience in the log processing field, they cover:

  1. Real-time collection of container logs
  2. Query analysis and visualization
  3. Context analysis of logs
  4. LiveTail — tail-f on the cloud

Real-time Collection of Container Logs

Container Log Types

To collect logs, you must first figure out where the logs are stored. This article shows you how to collect NGINX and Tomcat container logs.

Standard Output

Logging Drivers

The standard output of a container is handled by its logging driver. As shown in the following figure, different logging drivers write the standard output to different destinations.

[Figure: destinations written to by different logging drivers]
# Configure a syslog logging driver for all containers at the docker daemon level.
dockerd --log-driver syslog --log-opt syslog-address=udp://
# Configure a syslog logging driver for the current container only.
docker run --log-driver syslog --log-opt syslog-address=udp:// alpine echo hello world


Using a logging driver other than json-file or journald makes the docker logs API unavailable. For example, if you use portainer to manage containers on a host whose containers use such a logging driver, you will not be able to view their standard output through the portainer user interface.

Docker Logs API

For containers that use the default logging driver, you can obtain their standard output by sending a docker logs request to the docker daemon. Log collection tools that use this method include logspout and sematext-agent-docker. The following command obtains the last five log entries generated since 2018-01-01T15:00:00.

docker logs --since "2018-01-01T15:00:00" --tail 5 <container-id>


If you use this method when the log volume is large, you will put significant pressure on the docker daemon; as a result, it may not respond promptly to commands for creating and removing containers.

json-file Files

By default, the json-file logging driver writes container logs in JSON format to the host file /var/lib/docker/containers/&lt;container-id&gt;/&lt;container-id&gt;-json.log. This allows you to obtain the standard output of a container by directly collecting that host file.
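Each line of such a file is a JSON object with log, stream, and time fields. A minimal Python sketch of parsing one line follows; the sample entry is fabricated for illustration:

```python
import json

def parse_json_file_line(line: str) -> dict:
    """Parse one line of a Docker json-file log into a structured record."""
    record = json.loads(line)
    return {
        "message": record["log"].rstrip("\n"),  # raw bytes written by the container
        "stream": record["stream"],             # "stdout" or "stderr"
        "time": record["time"],                 # RFC 3339 timestamp with nanoseconds
    }

# Fabricated sample entry in the json-file format:
sample = '{"log":"hello world\\n","stream":"stdout","time":"2018-01-01T15:00:00.000000000Z"}'
parsed = parse_json_file_line(sample)
```

A collector tailing the host file would apply this parsing to each appended line.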

Text Log

Mount the Host File Directory

The simplest way to collect text log files from inside a container is to mount a host directory onto the container's log directory, using the bind mounts or volumes method when you start the container, as shown in the following figure.

[Figure: mounting a host directory onto a container log directory with bind mounts or volumes]

Calculate the Mount Point of the Container Rootfs

Collecting container logs by mounting a host directory is slightly intrusive to the application, because it requires you to specify the mount when starting the container. Ideally, log collection would be fully transparent to users. This can in fact be achieved by calculating the mount point of the container rootfs.

[Figure: calculating the mount point of the container rootfs]
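As an illustration of the idea: with the overlay2 storage driver, docker inspect reports the merged rootfs under GraphDriver.Data.MergedDir, and joining that directory with the in-container log path yields the file's location on the host. The sketch below assumes overlay2; the inspect data is a fabricated stand-in for real docker inspect output:

```python
import os

def host_log_path(inspect_data: dict, container_log_path: str) -> str:
    """Map an in-container log path to its host location by using
    the container rootfs mount point (overlay2 example)."""
    merged_dir = inspect_data["GraphDriver"]["Data"]["MergedDir"]
    # Strip the leading "/" so os.path.join keeps the rootfs prefix.
    return os.path.join(merged_dir, container_log_path.lstrip("/"))

# Fabricated snippet of `docker inspect <container-id>` output:
inspect_data = {
    "GraphDriver": {
        "Name": "overlay2",
        "Data": {"MergedDir": "/var/lib/docker/overlay2/abc123/merged"},
    }
}
path = host_log_path(inspect_data, "/usr/local/tomcat/logs/catalina.out")
```

A collector that does this needs no cooperation from the application: it discovers the container, computes the host path, and tails the file directly.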

Logtail Solution

After comprehensively comparing the various container log collection methods, and sorting through user feedback and requests, the Log Service team developed an all-in-one solution for processing container logs.

[Figure: architecture of the Logtail container log collection solution]


The Logtail solution has the following features:

  1. Supports auto discovery of containers. That is, after you have configured the target logs to be collected, when a container that meets the conditions is created, target logs of this container will be collected automatically.
  2. Supports specifying containers to collect from by Docker label and by environment variable, with both whitelist and blacklist mechanisms.
  3. Supports automatically tagging data. That is, adding data source identification information to the collected logs, such as container name, container IP address, and file path.
  4. Supports collecting K8s container logs.

Core Competitiveness

  1. Ensures the at-least-once semantics by using the checkpoint mechanism and deploying additional monitoring processes.
  2. Logtail has withstood multiple Double 11 and Double 12 shopping festivals and is deployed on over one million clients within Alibaba Group, so its stability and performance are proven.
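The checkpoint idea behind at-least-once delivery can be sketched as follows: the collector persists the file offset only after an upload succeeds, so after a crash it resumes from the last saved offset and re-reads anything that was in flight (duplicates are possible, losses are not). This is an illustrative sketch, not Logtail's actual implementation:

```python
import json
import os
import tempfile

class CheckpointedTailer:
    """At-least-once file tailer: the offset is saved only after a successful upload."""
    def __init__(self, log_path: str, ckpt_path: str):
        self.log_path = log_path
        self.ckpt_path = ckpt_path

    def _load_offset(self) -> int:
        if os.path.exists(self.ckpt_path):
            with open(self.ckpt_path) as f:
                return json.load(f)["offset"]
        return 0

    def collect(self, upload) -> int:
        """Read new lines, upload them, then persist the new offset."""
        offset = self._load_offset()
        with open(self.log_path) as f:
            f.seek(offset)
            lines = f.readlines()
            new_offset = f.tell()
        if lines:
            upload(lines)  # may raise; the offset is NOT advanced in that case
            with open(self.ckpt_path, "w") as f:
                json.dump({"offset": new_offset}, f)
        return len(lines)

# Demo with temporary files and fabricated content:
tmp = tempfile.mkdtemp()
log_file = os.path.join(tmp, "app.log")
ckpt_file = os.path.join(tmp, "ckpt.json")
with open(log_file, "w") as f:
    f.write("line1\nline2\n")
uploaded = []
tailer = CheckpointedTailer(log_file, ckpt_file)
count = tailer.collect(uploaded.extend)
```

If upload raises, the checkpoint keeps the old offset, so the next collect re-reads the same lines; that is exactly the at-least-once semantics described above.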

Collection of K8s Container Logs

Logtail is deeply integrated with the K8s ecosystem and can conveniently collect K8s container logs. This is another key feature of Logtail.

  1. Supports collection configuration management through CustomResourceDefinitions (CRDs). This method integrates easily with K8s deployment and publishing procedures.
  2. Supports collecting K8s container logs in Sidecar mode, where each pod runs its own Logtail collection container. This mode is suitable for large, hybrid, and PaaS clusters.

Query Analysis and Visualization

After collecting logs, you need to perform query analysis and visualization of these logs. We’ll take Tomcat container logs as an example to describe the powerful query, analysis, and visualization features provided by Log Service.

Saved Search

When container logs are collected, log identification information such as container name, container IP, and target file directory is attached to these logs. This allows you to quickly locate the target container and files based on this information when you run queries. For more information about the query feature, see Query syntax.

Real-Time Analysis

The Real-time analysis feature of Log Service is compatible with the SQL syntax and offers more than 200 aggregate functions. If you know how to write SQL statements, you will be able to easily write analytic statements that meet your business needs. For example:

* | SELECT request_uri, COUNT(*) AS c GROUP BY request_uri ORDER BY c DESC LIMIT 10
* | SELECT diff[1] AS c1, diff[2] AS c2, round(diff[1] * 100.0 / diff[2] - 100.0, 2) AS c3 FROM (SELECT compare(flow, 3600) AS diff FROM (SELECT sum(body_bytes_sent) AS flow FROM log))
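For readers unfamiliar with compare(), the second statement computes the current traffic and the value one hour (3600 s) earlier, then the percentage change between them. A sketch of the same arithmetic in Python, with fabricated byte counts:

```python
def flow_comparison(flow_now: float, flow_hour_ago: float):
    """Mimic the SELECT around `compare(flow, 3600)`:
    c1 = current value, c2 = value 3600 s ago, c3 = percent change."""
    c1 = flow_now
    c2 = flow_hour_ago
    c3 = round(c1 * 100.0 / c2 - 100.0, 2)
    return c1, c2, c3

# Fabricated hourly byte counts: 1.2 MB now vs. 1.0 MB an hour ago.
c1, c2, c3 = flow_comparison(1_200_000, 1_000_000)
```

Here c3 comes out to 20.0, i.e. traffic grew 20% hour over hour, which is the kind of trend a dashboard chart would plot.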


To visualize data, you can use the multiple built-in chart types of Log Service to display SQL computation results, and combine several charts into a dashboard.


Context Analysis of Logs

Features such as query analysis and dashboards can help us view the global information and understand the overall operation status of the system. However, to locate specific problems, we usually need context information from logs.

Definition of Context

Context refers to the clues around a problem, such as the information immediately before and after an error in a log. The context involves two elements:

  • Minimum differentiation granularity: the unit that distinguishes one context from another, such as a single log file or the standard output of one container.
  • Order assurance: within the same minimum differentiation granularity, information must be presented in strict order, even when tens of thousands of operations are performed every second.

Challenges of Context Query

In a centralized log storage architecture, neither the log collection client nor the server can preserve the original order of logs:

  1. On the server, the horizontally scaled, load-balanced architecture distributes logs from the same client machine across multiple storage nodes, making it difficult to restore their original sequence.


Log Service solves these challenges by adding a small amount of extra information to each log record and by using the keyword query capability of the server. The following figure shows how Log Service addresses these problems.

[Figure: how Log Service restores the context order of logs]
  1. Log collection clients of Log Service usually upload multiple logs at one time as a log package. The client writes a monotonically increasing package_id to each of these log packages, and each log record of a log package has a package-based offset.
  2. The server combines the source_id, package_id, and offset into a field, and creates an index for this field. This allows us to accurately locate a log entry based on the source_id, package_id, and offset, even when different types of logs are mixed and stored together on the server.
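Steps 1 and 2 mean that, within one source, every log entry is globally ordered by the tuple (package_id, offset). A minimal sketch of restoring one source's context order from mixed storage; the field names follow the description above, and the sample entries are fabricated:

```python
def restore_context(entries, source_id):
    """Restore the original order of one source's logs from mixed storage
    by sorting on (package_id, offset)."""
    own = [e for e in entries if e["source_id"] == source_id]
    return sorted(own, key=lambda e: (e["package_id"], e["offset"]))

# Fabricated entries from two sources, stored out of order:
mixed = [
    {"source_id": "host-a", "package_id": 2, "offset": 0, "msg": "c"},
    {"source_id": "host-b", "package_id": 1, "offset": 0, "msg": "x"},
    {"source_id": "host-a", "package_id": 1, "offset": 1, "msg": "b"},
    {"source_id": "host-a", "package_id": 1, "offset": 0, "msg": "a"},
]
ordered = [e["msg"] for e in restore_context(mixed, "host-a")]
```

Because package_id increases monotonically per source and offset orders entries within a package, the sort reproduces the sequence in which the client originally emitted the logs.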

LiveTail — tail-f on the Cloud

Apart from viewing the log context, sometimes we also want to continuously monitor the container output.

Traditional Method

The following table shows how to monitor container logs in real time by using the traditional method.

[Table: traditional methods for monitoring different types of container logs in real time]

Pain Points

Using traditional methods to monitor container logs has the following pain points:

  1. Different observation methods must be used to view different types of container logs, which increases the costs.
  2. Query results and key information are not displayed in a simple, intuitive way.

New Feature and Mechanism

To address these problems, Log Service offers the LiveTail feature. Compared with the traditional method, LiveTail has the following advantages:

  1. It allows you to use a unified method to view different types of container logs without diving into the target container.
  2. It supports keyword-based filtering.
  3. It supports setting key columns.

