Kubernetes Demystified: Using LXCFS to Improve Container Resource Visibility

This series of articles explores some of the common problems enterprise customers encounter when using Kubernetes. This second article in the series addresses the problem of legacy applications that cannot identify container resource restrictions in Docker and Kubernetes environments.

Linux uses cgroups to implement container resource restrictions, but the host's procfs /proc directory is still mounted by default in the container. This directory includes meminfo, cpuinfo, stat, uptime, and other resource information. Monitoring tools such as "free" and "top", as well as legacy applications, still acquire resource configuration and usage information from these files. When they run in a container, they read the host's resource status instead of the container's, which leads to incorrect results and confusion.

What Is LXCFS?

A common solution proposed in the community is to use LXCFS to provide resource visibility in the container. LXCFS is an open source Filesystem in Userspace (FUSE) designed to support LXC containers. It can also support Docker containers.

LXCFS uses a FUSE file system to provide the following procfs files in the container:

/proc/cpuinfo
/proc/diskstats
/proc/meminfo
/proc/stat
/proc/swaps
/proc/uptime

Schematic of LXCFS:

For example, LXCFS mounts the host's /var/lib/lxcfs/proc/meminfo file into the Docker container at the /proc/meminfo path. When processes in the container read this file, the LXCFS FUSE implementation reads the correct memory restriction from the container's cgroup. In this manner, applications obtain the correct resource constraint settings.

Using LXCFS in Docker Environments

Here, we use CentOS 7.4 as the testing environment, with support for the FUSE module already enabled. Because Docker for Mac, Minikube, and other development environments use highly tailored operating systems, they cannot support FUSE or run LXCFS for testing.

Install the LXCFS RPM package
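The original package URL and commands were not preserved in this copy. The following is a sketch, assuming a prebuilt LXCFS RPM is available (the filename and version are illustrative) and Docker is already installed. After starting LXCFS, we bind-mount its procfs files into a memory-limited container and run free:

```shell
# Install a prebuilt LXCFS RPM (filename and version are illustrative)
yum install -y lxcfs-2.0.8-1.el7.x86_64.rpm

# Start the LXCFS FUSE file system on the host
systemctl start lxcfs

# Run a container with a 256 MB memory limit, bind-mounting the
# LXCFS-maintained procfs files over the container's /proc entries
docker run -it --rm -m 256m \
  -v /var/lib/lxcfs/proc/cpuinfo:/proc/cpuinfo:rw \
  -v /var/lib/lxcfs/proc/diskstats:/proc/diskstats:rw \
  -v /var/lib/lxcfs/proc/meminfo:/proc/meminfo:rw \
  -v /var/lib/lxcfs/proc/stat:/proc/stat:rw \
  -v /var/lib/lxcfs/proc/swaps:/proc/swaps:rw \
  -v /var/lib/lxcfs/proc/uptime:/proc/uptime:rw \
  ubuntu:16.04 free -m
```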



We can see that the total memory is 256 MB, so the configuration has taken effect.

Using LXCFS in Kubernetes

Some users have asked how they can use LXCFS in a Kubernetes cluster environment. To answer their question, we will provide an example to be used for reference.

First, we must install and start LXCFS on the cluster nodes. Here, we use the Kubernetes method, which makes use of containers and DaemonSet to run the LXCFS FUSE file system.

All the sample code used in this article can be obtained from the following GitHub address:

The manifest file is as follows:
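The manifest itself was not preserved in this copy; the following is a sketch of a typical LXCFS DaemonSet (the image name and tag are assumptions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: lxcfs
  labels:
    app: lxcfs
spec:
  selector:
    matchLabels:
      app: lxcfs
  template:
    metadata:
      labels:
        app: lxcfs
    spec:
      # LXCFS must see host processes and perform FUSE mounts
      hostPID: true
      containers:
      - name: lxcfs
        image: lxcfs:3.0.4        # hypothetical image name/tag
        securityContext:
          privileged: true
        volumeMounts:
        - name: cgroup
          mountPath: /sys/fs/cgroup
        - name: lxcfs
          mountPath: /var/lib/lxcfs
          # Propagate the FUSE mount back to the host
          mountPropagation: Bidirectional
      volumes:
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: lxcfs
        hostPath:
          path: /var/lib/lxcfs
          type: DirectoryOrCreate
```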

NOTE: Because the LXCFS FUSE file system must share the host's PID namespace and requires privileged mode, we have configured the relevant container startup parameters accordingly.

Using the following command, we can automatically install and deploy LXCFS on all cluster nodes. Easy, right?:-)
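Assuming the DaemonSet manifest is saved as lxcfs.yaml (the filename is illustrative; use the path from the repository above):

```shell
kubectl apply -f lxcfs.yaml
```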

So how do we use LXCFS in Kubernetes? As in the Docker example above, we can add volume and volumeMounts definitions for the files under /proc in the pod definition. However, this makes the Kubernetes application deployment file more complicated. Is there a way to have the system mount the relevant files automatically?
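For illustration, a manual mount for /proc/meminfo alone would look like the following pod spec fragment (a sketch; the image name is an assumption, and the same pattern must be repeated for each of the other procfs files):

```yaml
spec:
  containers:
  - name: web
    image: httpd:2.4
    volumeMounts:
    - name: lxcfs-meminfo
      mountPath: /proc/meminfo
  volumes:
  - name: lxcfs-meminfo
    hostPath:
      path: /var/lib/lxcfs/proc/meminfo
```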

Kubernetes provides an Initializer extension that can be used for interception and injection during resource creation. This gives us an elegant method by which to automatically mount LXCFS files.

Note: Alibaba Cloud Kubernetes clusters provide Initializer support by default. For testing on self-built clusters, see the instructions for enabling the function in this document.

The manifest file is as follows:
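The manifest itself was not preserved in this copy; the following sketch is consistent with the description below (the image name/tag and any names not mentioned in the text are assumptions):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: lxcfs-initializer-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lxcfs-initializer-role
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: lxcfs-initializer-role-binding
subjects:
- kind: ServiceAccount
  name: lxcfs-initializer-service-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: lxcfs-initializer-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lxcfs-initializer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lxcfs-initializer
  template:
    metadata:
      labels:
        app: lxcfs-initializer
    spec:
      serviceAccountName: lxcfs-initializer-service-account
      containers:
      - name: lxcfs-initializer
        image: lxcfs-initializer:0.0.1   # hypothetical image
---
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: lxcfs.initializer
initializers:
- name: lxcfs.initializer.kubernetes.io
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    resources: ["deployments"]
```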

Note: This is a typical Initializer deployment. First, we create the service account lxcfs-initializer-service-account and authorize it to query and modify "deployments" resources. We then run a Deployment named "lxcfs-initializer", which uses the preceding service account to watch "deployments" resources. If a deployment carries an annotation that sets initializer.kubernetes.io/lxcfs to true, the Initializer injects the LXCFS procfs file mounts into the containers of that application.

We can run the following command to use the Initializer after deployment is complete:
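Assuming the Initializer manifest is saved as lxcfs-initializer.yaml (the filename is illustrative):

```shell
kubectl apply -f lxcfs-initializer.yaml
```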

Next, we will deploy a simple Apache application and allocate 256 MB of memory to it. In addition, we declare the following annotation: "initializer.kubernetes.io/lxcfs": "true".

The manifest file is as follows:
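The original manifest was not preserved in this copy; the following sketch is consistent with the text (the image name/tag is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  annotations:
    "initializer.kubernetes.io/lxcfs": "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: httpd:2.4          # hypothetical image
        resources:
          requests:
            memory: "256Mi"
          limits:
            memory: "256Mi"
```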

We can use the following method to deploy and test the application:
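A sketch of the deployment and verification steps (the manifest filename and pod name are illustrative):

```shell
kubectl apply -f web.yaml

# Run free inside the pod; the total memory should show the 256 MB limit
kubectl exec -it <web-pod-name> -- free -m

# Inspect the pod spec to confirm the procfs volume mounts were injected
kubectl get pod <web-pod-name> -o yaml | grep -A 3 volumeMounts
```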

We can see that the total memory returned by the free command is the container resource capacity we set.

We can check the pod configuration to see that the procfs files have been mounted correctly.

In Kubernetes, we can also use PodPreset to implement a similar function. However, we will not describe it here due to limited space.


This article showed how to use LXCFS to provide container resource visibility, allowing legacy systems to better identify resource restrictions when running in containers.

In this article, we also showed how to use the container and DaemonSet method to deploy the LXCFS FUSE file system. This approach not only greatly simplifies deployment, but also takes advantage of Kubernetes' own container management capabilities: it supports automatic recovery of failed LXCFS processes and ensures consistent node deployment when the cluster is scaled. The same technique applies to other similar monitoring agents and system extensions.

In addition, we showed how to use the Kubernetes Initializer extension to automatically mount LXCFS files. This entire process is transparent to application deployment staff, greatly simplifying O&M tasks. Using similar methods, we can flexibly tailor application deployment activities to meet special business needs.

Alibaba Cloud Kubernetes Service is the first such service with certified Kubernetes consistency. It simplifies Kubernetes cluster lifecycle management and provides built-in integration for Alibaba Cloud products. In addition, the service further optimizes the Kubernetes developer experience, allowing users to focus on the value of cloud applications and further innovations.


Follow me to keep abreast of the latest technology news, industry insights, and developer trends.