Getting Started with Kubernetes | Application Storage and Persistent Volumes: Volume Snapshots and Topology-aware Scheduling

1) Basics

Background of Volume Snapshots

To improve the fault tolerance of data operations on volumes, a storage service must be able to take snapshots of online data and restore data from them quickly. Snapshots also help to quickly complete data replication and migration operations, such as environment replication and data development. Kubernetes provides the CSI Snapshot controller to create volume snapshots.

Volume Snapshot User Interface — Snapshot

Kubernetes uses a system of persistent volume claim (PVC) and persistent volume (PV) to simplify storage operations. The volume snapshot design follows the PVC and PV system design. To create a volume snapshot, create a VolumeSnapshot object and specify the VolumeSnapshotClass object. Then, the relevant component in a cluster dynamically creates the volume snapshot and the corresponding VolumeSnapshotContent object. As shown in the following figure, the VolumeSnapshotContent object is dynamically created following a process similar to the dynamic PV provisioning process.
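As a sketch of this flow, a VolumeSnapshot and its VolumeSnapshotClass might look as follows. The object names and the hostpath.csi.k8s.io driver are placeholders; substitute your own CSI driver, and note that older clusters may use a beta API version.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass            # placeholder name
driver: hostpath.csi.k8s.io      # replace with your CSI driver
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot              # placeholder name
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: my-pvc   # the PVC whose data is snapshotted
```

Once the VolumeSnapshot object is created, the snapshot controller takes the snapshot and dynamically creates the corresponding VolumeSnapshotContent object.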

Volume Snapshot User Interface — Restore

How do you quickly restore data from a volume snapshot? The following figure provides the answer.
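In short, you restore by creating a new PVC whose dataSource field points at the snapshot. A minimal sketch, with placeholder names and sizes:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc             # placeholder name
spec:
  storageClassName: csi-disk     # placeholder StorageClass
  dataSource:
    name: my-snapshot            # the VolumeSnapshot to restore from
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi              # must be no smaller than the snapshot's source volume
```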

Topology — Definition

“Topology” in this article refers to the locations of nodes in a Kubernetes cluster. The topology of a node is recorded in its labels field.

  • Zone: In Kubernetes, zones are identified by the node label topology.kubernetes.io/zone (older clusters use failure-domain.beta.kubernetes.io/zone). This label identifies the zones in which the nodes of an inter-zone Kubernetes cluster reside.
  • Hostname: This specifies a single node. In Kubernetes, hosts are identified by the node label kubernetes.io/hostname. This label is described in detail in the section about local PVs at the end of the article.
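For illustration, the two kinds of topology labels might appear on a Node object like this (the node name and zone value are made up):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1
  labels:
    topology.kubernetes.io/zone: cn-hangzhou-b   # which zone the node resides in
    kubernetes.io/hostname: node-1               # identifies this single node
```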

Background of Volume Topology-aware Scheduling

As mentioned in the previous section, Kubernetes uses the PV and PVC system to separate storage resources from computing resources. If a PV restricts the locations from which its data can be accessed, its nodeAffinity field specifies the nodes that can access the data in this PV.
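For example, a PV backed by a zonal disk could declare such a restriction in its nodeAffinity field, roughly as follows (the driver name, disk ID, and zone are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zonal-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: diskplugin.csi.alibabacloud.com   # illustrative CSI driver name
    volumeHandle: d-example-disk-id           # illustrative disk ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - cn-hangzhou-b               # only nodes in this zone can use the PV
```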

Volume Topology-aware Scheduling

To conclude, the preceding problems occur because the pod that needs to use a PV is scheduled to a different node after the PV controller binds the PVC to this PV or dynamically creates the PV. However, a PV imposes restrictions on the nodes where the pod can run. Take a local PV as an example: a pod can use this PV only if it is scheduled to the specified node. In scenarios involving inter-zone data querying, a pod can query data from an Alibaba Cloud disk only if the pod is scheduled to a node in the same zone as the disk. So how does Kubernetes solve both problems?

  • The first component is the PV controller, which must support delayed binding: the binding of a PVC to a PV, as well as dynamic PV provisioning, is postponed until the pod that uses the PVC has been scheduled. In a dynamic PV provisioning scenario, the PV is then created based on the topology information recorded on the chosen node, which ensures that the new PV is created in the same topology as the node where the pod is running. In the preceding example of the Alibaba Cloud disk, the pod runs in zone 1, and therefore the PV is created in zone 1.
  • The second component is the dynamic PV provisioner. This component needs to create PVs dynamically based on the topology information of pods.
  • The third component is the Kubernetes scheduler, which undergoes the most important modification. The scheduler selects a node for a pod based not only on the pod’s CPU and memory requirements, but also on its storage requirements. That is, the scheduler checks whether a node matches the nodeAffinity attribute of the PV to which the PVC is bound, or, in a dynamic PV provisioning scenario, whether the node meets the topological restriction specified in the StorageClass object. In this way, the scheduler ensures that the chosen node meets the topological restrictions of the pod’s volumes.

2) Use Cases

Next, let’s demonstrate the basics introduced in section 1 by using several YAML samples.

Volume Snapshot/Restore Sample

Example of a Local PV

The following figure shows a YAML sample of a local PV.
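Such a local PV manifest might look like the following sketch (the node name and disk path are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd0                  # placeholder path on the node
  nodeAffinity:                            # pins the PV to a single host
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1                   # placeholder node name
```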

Example of Topological Restriction in a Dynamic PV Provisioning Scenario

The following example shows how the topological restriction is implemented in a dynamic PV provisioning scenario.
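One way to express such a restriction is a StorageClass that combines delayed binding with allowedTopologies, roughly as follows (the provisioner name and zone value are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-disk
provisioner: diskplugin.csi.alibabacloud.com   # placeholder CSI provisioner
volumeBindingMode: WaitForFirstConsumer        # delay binding until the pod is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - cn-hangzhou-b                      # PVs may only be provisioned in this zone
```

With WaitForFirstConsumer, the PV is not provisioned when the PVC is created; it is provisioned only after the scheduler has chosen a node, so the volume can be placed in that node’s topology.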

3) Demonstration

This section demonstrates the preceding operations in a live environment.

4) Processes

Volume Snapshot/Restore Process in Kubernetes

Let’s take a look at the volume snapshot and restore process in Kubernetes. Like other CSI features, the implementation consists of two parts:

  • One part consists of the common components maintained by the Kubernetes community, such as the snapshot controller and the CSI snapshotter sidecar, which handle the snapshot logic that does not depend on any specific storage system.
  • The other part is the CSI plug-ins that cloud storage vendors implement by using their own APIs. The CSI plug-ins are also called storage drivers.

Volume Topology-aware Scheduling Process in Kubernetes

The following describes the volume topology-aware scheduling process:

  • The first stage is predicates, during which the scheduler filters out the nodes that do not meet the pod’s requirements. The nodes that match the resource requirements can then be used by the pod.
  • The second stage is priorities, during which the scheduler scores the qualified nodes and selects the optimal one.
  • Then, the scheduler adds the scheduling result to the nodeName field under the spec section of the pod. The scheduling result is detected by the kubelet on the chosen node, and then the pod creation process starts.
  • For a PVC that has been bound, the scheduler checks whether the nodeAffinity attribute of the bound PV matches the topology of the current node. If not, the pod cannot be scheduled to this node. If it does, the scheduler goes on to check the PVCs whose binding operation is delayed.
  • For a PVC whose binding operation is delayed, the scheduler queries the existing PVs in the cluster and selects those that meet the requirements of the PVC. It then matches these PVs against the topology specified in the labels field of the current node. If none of these PVs matches the topology, the scheduler checks whether the topology of the current node meets the topological restriction specified in the StorageClass object for dynamic PV provisioning. If it does, this node is qualified; if not, the pod cannot be scheduled to this node.


This article explains the Kubernetes resource objects for the volume snapshot feature and how to use these objects in the PVC and PV system. It walks through two example problems that occur in actual business scenarios to explain why volume topology-aware scheduling is necessary and how Kubernetes uses this feature to solve them. It then describes the mechanisms of volume snapshots and topology-aware scheduling in Kubernetes in detail to give users in-depth insight into the implementation of these features.
