Kubernetes Application Management — Upgrade (1)

By Bruce Wu


In software development, the model of small, frequent updates and fast iteration has been widely adopted by more and more Internet companies, and the frequency of application changes and upgrades keeps increasing. To address different upgrade requirements and ensure that diversified applications are upgraded smoothly, a series of deployment and release models have been developed.

As more and more applications are containerized, how to smoothly upgrade container-based applications has attracted extensive attention. This article describes the upgrade methods for the different deployment forms in Kubernetes (K8s), and focuses on how to implement rolling releases of applications managed by a Deployment. The study in this article is based on K8s 1.13.

K8s Application Upgrade

In K8s, the pod is the basic unit of application deployment and upgrade. Generally, a pod represents an application instance, and it is deployed and run in the form of a Deployment, StatefulSet, DaemonSet, or Job. The following sections describe how to upgrade pods under each of these deployment models.


Deployment

In most cases, Deployment is the most common deployment form of pods. This article describes the Deployment method by using a Spring Boot-based Java application as an example. This application is a simplified version of a real application and is very representative. It has the following characteristics:

1. After the application is started, it takes some time to load the configuration. During this period, it cannot provide external services.

2. An application may not always be able to normally provide services after it is started.

3. The application may not be able to automatically exit when it is unable to provide services.

4. During the upgrade process, the old application instance must not receive new requests, but it must have sufficient time to process existing requests.


To ensure zero-downtime and uninterrupted upgrade of applications with the preceding characteristics, you need to carefully set the relevant parameters of Deployment. The upgrade-related configuration used in this example is provided as follows. For more information about the complete configuration, see spring-boot-probes-v1.yaml.

kind: Deployment
spec:
  replicas: 8
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
  minReadySeconds: 120
  template:
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: spring-boot-probes
        image: registry.cn-hangzhou.aliyuncs.com/log-service/spring-boot-probes:1.0.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 1
        livenessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 40
          periodSeconds: 20
          successThreshold: 1
          failureThreshold: 3

Configuring Strategies

You can configure the pod replacement strategy by setting the following parameters:

1. .spec.strategy.type: specifies the strategy used to replace old pods with new ones. The value can be Recreate or RollingUpdate, and the default value is RollingUpdate.

2. .spec.strategy.rollingUpdate.maxSurge: specifies the maximum number of pods that can be created above the desired number of pods during the upgrade. The value can be an absolute number or a percentage of the desired number of pods.

3. .spec.strategy.rollingUpdate.maxUnavailable: specifies the maximum number of pods that can be unavailable during the upgrade. The value can also be an absolute number or a percentage.

You can meet the upgrade requirements in different scenarios by adjusting the values of maxSurge and maxUnavailable.

1. To upgrade the application as quickly as possible while ensuring high system availability and stability, set maxUnavailable to 0, and specify a larger value for maxSurge.

2. To speed up the upgrade in the case of insufficient system resources and low pod load, set maxSurge to 0 and specify a larger value for maxUnavailable. Note that, if the value of maxSurge is 0, and that of maxUnavailable is DESIRED, the entire service may become unavailable. In this case, RollingUpdate becomes downtime release.

In this example, maxSurge is set to 3 and maxUnavailable is set to 2 to balance the stability, resource consumption, and upgrade speed.
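The two tuning directions above can be sketched as alternative strategy stanzas. This is only an illustration; the value of 4 is an assumed example, not taken from the configuration in this article:

```yaml
# Prioritize availability: never drop below DESIRED ready pods,
# at the cost of running up to 4 extra pods during the upgrade.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 4
    maxUnavailable: 0
---
# Prioritize resource usage: never exceed DESIRED pods,
# at the cost of running with up to 4 fewer pods during the upgrade.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0
    maxUnavailable: 4
```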

Configuring Probes

K8s provides the following two types of probes:

1. ReadinessProbe: detects whether a container is ready to receive traffic. If the detection fails, the pod is removed from the endpoints of the services that match it, and no traffic is sent to the pod until it passes the detection again.

2. LivenessProbe: detects whether a container is running properly. If the detection fails, kubelet kills the container, and the container is then restarted according to the restart policy.

The probe configuration is very flexible. You can specify the detection frequency, success threshold, and failure threshold of a probe. For more information about parameter description and configuration methods, see Configure Liveness and Readiness Probes.

In this example, the target containers are configured with readiness probes and liveness probes:

1. Set initialDelaySeconds of the readiness probe to 30, because the application takes 30 seconds on average to complete initialization.

2. When you configure liveness probes, ensure that the target containers have sufficient time to get ready. If the values of the initialDelaySeconds, periodSeconds, and failureThreshold parameters are too small, a container may be restarted before it gets ready, and it may then never get ready. The example configuration ensures that a container is not restarted if it gets ready within 80s after it is started (initialDelaySeconds + periodSeconds * (failureThreshold - 1) = 40 + 20 * 2 = 80). This provides a sufficient buffer in addition to the average initialization time of 30s.

3. For readiness probes, periodSeconds is set to 10 and failureThreshold to 1, so a container stops receiving traffic within at most 10 seconds after it becomes abnormal.

4. For liveness probes, periodSeconds is set to 20 and failureThreshold to 3, so a container is restarted only after it has remained abnormal for three consecutive detections, that is, about 60 seconds. This prevents containers from being restarted because of transient failures.

Configuring minReadySeconds

Generally, after a newly created pod becomes ready, K8s considers it available and deletes the old pod. However, some problems can only be observed when the new pod processes user requests. To be safe, do not delete the old pod until you verify that no problem exists after observing the newly ready pod for a while.

The minReadySeconds parameter can control the length of the period that a pod must be observed after it gets ready. If containers within a new pod run properly during this period, K8s considers this pod available and deletes the old pod. Take caution when you configure this parameter. A smaller value may cause the observation to be insufficient, but a higher value may slow down the upgrade process. In this example, minReadySeconds is set to 120s to ensure a ready pod can complete a liveness detection cycle.

Configuring terminationGracePeriodSeconds

When K8s is about to delete a pod, it sends a SIGTERM signal to the containers inside the pod and removes the pod from the endpoint list of the service. If the containers are not terminated within the specified period of time (30 seconds by default), K8s sends a SIGKILL signal to forcibly terminate them. For more information about the detailed procedure, see Termination of Pods.

Generally, the application takes no more than 40s to process a request. To ensure that requests already sent to the server can be processed completely, terminationGracePeriodSeconds is set to 60 in this example. You can determine the value of terminationGracePeriodSeconds based on the actual situation of your applications.
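A common complementary technique, which is not part of the example configuration, is a preStop hook that briefly delays the delivery of SIGTERM. This gives the removal of the pod from the service endpoints time to propagate to all kube-proxy instances, so that in-flight requests routed through stale endpoints can still be accepted and completed. The 5-second delay below is an assumed illustrative value:

```yaml
# Hypothetical addition to the container spec of this example.
lifecycle:
  preStop:
    exec:
      # Sleep briefly before SIGTERM is delivered to the container.
      command: ["sh", "-c", "sleep 5"]
```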

Viewing the Upgrade Progress

The preceding configurations can ensure smooth upgrade of the target application. We can change any field of the PodTemplateSpec in the Deployment object to trigger a pod upgrade, and run the command kubectl get rs -w to view the upgrade progress. The numbers of old and new pod replicas change as follows:

1. K8s creates new pods, the number of which is equal to maxSurge. Now, the total number of pods reaches the upper limit: DESIRED + maxSurge.

2. Without waiting for the new pods to become ready or available, K8s immediately starts deleting old pods, the number of which is equal to maxUnavailable. Now, the number of available pods is equal to DESIRED - maxUnavailable.

3. When an old pod is completely deleted, a new pod is immediately created.

4. When a new pod passes the readiness detection and gets ready, K8s sends traffic to this pod. However, this pod is not considered available until the specified observation period ends.

5. If a ready pod runs properly during the observation period and is considered available, another old pod is deleted.

6. Repeat steps 3, 4, and 5 until all old pods are deleted and the number of new available pods reaches the target number of replicas.
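The steps above appear in the kubectl get rs -w output roughly as follows. This excerpt is purely illustrative: the hash suffixes, ages, and intermediate counts will differ in a real cluster, and only the overall pattern (new ReplicaSet scaled up by maxSurge, old one scaled down by maxUnavailable, until 8/0) matches the example configuration:

```
NAME                            DESIRED   CURRENT   READY   AGE
spring-boot-probes-7c4d9f6b8    3         3         0       2s
spring-boot-probes-5f86c8b7d    6         6         6       30m
spring-boot-probes-7c4d9f6b8    5         5         2       3m
spring-boot-probes-5f86c8b7d    4         4         4       33m
...
spring-boot-probes-7c4d9f6b8    8         8         8       12m
spring-boot-probes-5f86c8b7d    0         0         0       42m
```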

Rolling Back Upgrades

Application upgrades cannot be successful every time. You may find the new version unable to meet your expectations during or after an upgrade. In this case, you may want to roll back to a previous version that is more stable. K8s records every change of PodTemplateSpec (such as changes in template labels or container images). This allows you to conveniently roll back to a stable version based on the version number if you find any problems with the new version. For more information about the detailed procedure, see Rolling Back a Deployment.
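The rollback itself is performed with kubectl's rollout subcommands; the Deployment name below is taken from this article's example, and revision 2 is an assumed placeholder:

```shell
# List the recorded revisions of the Deployment.
kubectl rollout history deployment/spring-boot-probes

# Inspect a specific revision before rolling back.
kubectl rollout history deployment/spring-boot-probes --revision=2

# Roll back to the previous revision, or to a specific one.
kubectl rollout undo deployment/spring-boot-probes
kubectl rollout undo deployment/spring-boot-probes --to-revision=2
```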


StatefulSet

StatefulSet is a common deployment method for stateful pods. For these pods, K8s also provides many parameters to flexibly control their upgrade process. Most of these parameters are the same as those that are used for upgrading pods running in Deployment. This article mainly describes their differences.

Strategy Types

In K8s 1.7 and later versions, StatefulSet supports two strategy types: OnDelete and RollingUpdate.

1. OnDelete: after the PodTemplateSpec is changed, new pods are created only after you manually delete old pods.

2. RollingUpdate: the default strategy in K8s 1.7 and later versions. After the PodTemplateSpec is changed, old pods are automatically deleted and new pods are created. Pods are upgraded one at a time in reverse ordinal order, and each new pod must be running and ready before the next pod is upgraded.


You can set the .spec.updateStrategy.rollingUpdate.partition parameter to upgrade only some of the pods. After you set partition, only pods with an ordinal greater than or equal to its value undergo the rolling upgrade. The rest of the pods remain unchanged.

You can also implement the canary upgrade by continuously decreasing the value of the partition parameter. For more information about the detailed operation steps, see Rolling Out a Canary.
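A canary rollout via partition can be sketched as follows; the StatefulSet name and replica count here are assumptions for illustration, not from this article's example:

```yaml
# With 5 replicas and partition: 4, only the pod with ordinal 4
# (web-4) is upgraded when the PodTemplateSpec changes. Lowering
# partition step by step (4 -> 3 -> ... -> 0) extends the canary
# to the remaining pods.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 5
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 4
```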


DaemonSet

DaemonSet allows you to run a replica of a pod on some or all K8s worker nodes. DaemonSet is usually used to run monitoring or log collection applications. For pods in a DaemonSet, the parameters that control the upgrade process are basically the same as those of Deployment. Slight differences exist in the supported strategy types, though. DaemonSet supports the following two strategy types:

1. OnDelete: after the PodTemplateSpec is changed, new pods are created only after you manually delete old pods. This matches the default behavior in K8s 1.5 and earlier versions.

2. RollingUpdate: the default strategy in K8s 1.6 and later versions. After the PodTemplateSpec is changed, old pods are automatically deleted and new pods are created.

For more information about the detailed procedure, see Perform a Rolling Update on a DaemonSet.
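A DaemonSet rolling update is configured in the same way as a Deployment, except that in K8s 1.13 the rollingUpdate block of a DaemonSet supports only maxUnavailable (there is no maxSurge, since at most one replica runs per node). The DaemonSet name below is a hypothetical example:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent   # hypothetical name
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # upgrade nodes one at a time
```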


Job

Generally, Deployment, StatefulSet, and DaemonSet are used to run long-running processes. By contrast, pods in a Job exit after the specified tasks are completed, so a rolling update does not apply. After you change the PodTemplateSpec of a Job, you need to delete the old Job and its pods, and then run the job again with the new configuration.
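The re-run described above can be sketched with kubectl; the job name and manifest file are assumptions for illustration:

```shell
# Delete the old Job; cascading deletion (the default) also
# removes the pods it created.
kubectl delete job my-batch-job

# Re-create the Job from the updated manifest.
kubectl apply -f my-batch-job.yaml
```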


Summary

K8s supports zero-downtime and uninterrupted upgrades for most applications, but it also has some limitations:

1. Currently, K8s natively supports only two deployment upgrade strategies: downtime release (Recreate) and rolling release (RollingUpdate). For applications that have additional requirements, such as blue-green release, canary release, and A/B testing, custom development or third-party tools are required.

2. K8s offers a rollback feature, but you must trigger rollbacks manually; automatic rollback based on specified conditions is not supported.

3. Some applications also need to gradually scale up or down in batches, which is currently not supported by K8s.

For how to solve these problems, see the next article of this series.


