Pause, Resume and Scale Kubernetes Deployments

By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

Pausing, resuming and scaling Deployments are very easy to understand.

We have several commands to show the status of these 3 actions. We will use the same commands repeatedly so that you can see, over time, which command works best for which specific need.

This tutorial consists of one or two sentences of description followed by a command to run, then a short snippet of comments, another command, and so on. Each of these steps must be completed for the tutorial to work.

Kubectl Rollout Pause

Kubernetes enables you to pause a Deployment. You can then make adjustments to the Deployment and resume it.

Deployments do not need to be paused to make a change. Use pause when you want to calmly make several changes ( they are kept in a queue until resume is ordered ).

We start with one normal, working Deployment :

Create the Deployment
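
The original manifest is not reproduced here; a minimal sketch, assuming a file named busybox-deployment.yaml that defines a Deployment called busybox-deployment ( the name seen in the Pod names later in this tutorial ) with 10 busybox replicas, could look like this:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox-deployment
    spec:
      replicas: 10
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
          - name: busybox
            image: busybox:1.30        # assumed tag; matches the v1.30.0 version check later
            # keep the container alive; a bare busybox container would exit immediately
            command: ['sh', '-c', 'sleep 3600']

    kubectl create -f busybox-deployment.yaml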

Just 2 seconds later the Deployment is complete. ( The busybox image is already on the server, so starting such a simple container is very fast; all 10 replicas start up simultaneously, adding to the speed. )

Check the rollout status:
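
For example, assuming the Deployment name above:

    kubectl rollout status deployment/busybox-deployment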

Describe the Deployment :
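
Again with the same assumed Deployment name:

    kubectl describe deployment/busybox-deployment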

The previous 3 commands showed a perfectly running Deployment with 10 Pods.

Now we pause this Deployment and observe its status.
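
The pause command is, for example:

    kubectl rollout pause deployment/busybox-deployment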

get deploy does not actually show this Deployment as paused.

kubectl rollout status does not actually show this Deployment as paused.

This is a disappointment … rollout status deployment does NOT show Deployment as being paused.

It shows ROLLOUT status, not DEPLOYMENT status.

Describe deployment :

ONLY one line shows that this Deployment is now paused:

IMPORTANT: ONLY use kubectl describe to reliably find paused Deployments.
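
One quick way ( a sketch; the exact describe wording may vary by Kubernetes version, but a paused Deployment typically shows a Progressing condition with reason DeploymentPaused ):

    kubectl describe deployment/busybox-deployment | grep -i paused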

Make Changes to Paused Deployment

While a Deployment is paused we can make changes to its Pods.

Change all our replicated Pods to use the busybox:1.30-glibc image.
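
A sketch of the image change, assuming the container in the Pod template is named busybox:

    kubectl set image deployment/busybox-deployment busybox=busybox:1.30-glibc --record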

The --record flag records the fact that we are using a new image in the Deployment annotations.

This specific change is not applied immediately ( Deployment is currently paused ).

While a Deployment is paused we can make changes to its number of replicas.

Changes to number of replicas are implemented immediately.
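
For example ( the Pod counts later in this tutorial suggest the Deployment was scaled from 10 down to 5 replicas at this point ):

    kubectl scale deployment/busybox-deployment --replicas=5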

Important: note the up-to-date column … none of the available / running Pods are up to date. They are all still using the busybox image they were running at the time of the pause command.

Show the list of Pods — they still run the old busybox image.

Kubectl Rollout Resume

We need to resume our rollout so that a new set of Pods can be created using the image we requested : busybox:1.30-glibc
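
The resume command is:

    kubectl rollout resume deployment/busybox-deployment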

3 seconds later: 3 new Pods at the bottom are being created.

another 10 seconds later …

3 Pods in the middle of the list are already new.

The Pod at the top of the list is old. It is very easy to tell old from new: old Pods have an older age.

( Freshly created new Pods have a young age )

another 10 seconds later …

All Pods are new … short running age is the easy giveaway hint.

Display Deployment status … all 5 Pods up to date.

Note the last column. The overall age of this Deployment is 7 minutes. Only the last 30 seconds have used the new busybox image.

Show the rollout history of the Deployment :
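
For example:

    kubectl rollout history deployment/busybox-deployment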

We see our new busybox image listed as revision 2.

Scaling Paused Deployments

Deployments can be scaled while they are running.

Scaling in this context means changing the number of running identical Pod replicas.

Below is a demo showing that you can freely scale a paused Deployment:

Current state for reference:

We have 5 running Pods.

Pause Deployment :

Get list of Pods:

IMPORTANT: Pausing a Deployment DOES NOT pause its containers.

All Pods still running.

Change replicas to 2:
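
For example:

    kubectl scale deployment/busybox-deployment --replicas=2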

Investigate change … only 2 Pods now running:

Change replicas to 6:
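
Similarly:

    kubectl scale deployment/busybox-deployment --replicas=6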

Investigate change … 6 Pods now running:

Four new Pods have an age of only 4 seconds.

Let’s change the Pods to all use yet another busybox image: busybox:1.30-uclibc
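
Again a sketch, assuming the same container name as before:

    kubectl set image deployment/busybox-deployment busybox=busybox:1.30-uclibc --record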

Important: right now the Deployment is paused. It only takes note of the newly desired image. It does not roll out the change immediately — it is waiting for the kubectl rollout resume command.

Show list of Pods. They are unaware a change is coming. Still running as before.

The Deployment status shows ZERO up-to-date Pods. It is aware a change is pending while the Deployment is in a paused state.

Rollout history does not show the new busybox:1.30-uclibc image as pending.

It only shows the history of previous and currently running revisions.

Let’s resume the Deployment :

5 seconds later … 3 new Pods being created.

another 5 seconds later:

  • 3 new Pods running
  • 3 new Pods being created
  • 2 old Pods running ( at bottom ) waiting for termination

A few seconds later all Pods are new ( young AGE )

All Pods up to date with desired new busybox image:

History now shows new busybox image as revision 3.

Describe Deployment :

Note the annotation that shows which image is being used.

Delete Pod.

Kubectl Rollout Undo

Deployments allow you to roll back to any previous revision.

By default a Deployment keeps a list of your last 10 revisions.

Let’s create our normal test set of 10 Pod replicas :

( The first few steps should be boringly familiar by now )

Create the Deployment
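
The same ( assumed ) manifest from the first section can be reused:

    kubectl create -f busybox-deployment.yaml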

Show history:

Show the Pods:

Show version of busybox running … to compare against later.
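
One way to do this ( a sketch; substitute one of your own Pod names; running busybox with no arguments prints its version banner ):

    kubectl exec busybox-deployment-568495f8b6-rv8sw -- busybox | head -1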

Remember that … BusyBox v1.30.0

Change all Pods to use busybox:1.29.3

Investigate Pods during rollout process

Seconds later … complete set of 10 fresh new Pods with young age.

Check to make sure the Pods are using busybox version 1.29.3:

Show history. New busybox is revision 2.

Describe shows annotation as expected.

NOW we undo this latest Deployment.

We undo the Deployment back to revision 1 via the flag: --to-revision=1
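
For example:

    kubectl rollout undo deployment/busybox-deployment --to-revision=1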

Follow state of undo rollout

8 Pods up to date in a few seconds.

Still only 8 Pods up to date.

Seconds later … all 10 Pods back to the previous version busybox.

Check that the Pods are using busybox version 1.30 … the number we had to remember.

Success. kubectl rollout undo deployment correctly rolled back to revision 1.

Delete Deployment

Deployment Internals

Let’s create a Deployment and explore its internals.

Create the Deployment

List Deployment status :

A Deployment actually manages a ReplicaSet that runs under its command.

rs is the abbreviation for ReplicaSet on the command line.

The ReplicaSet name starts with the name of the Deployment. Kubernetes adds a 10 character hash.

These Pods run under the control of the ReplicaSet. Kubernetes adds a 5 character random suffix to individual Pod names.
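
To see the ReplicaSet and its Pods:

    kubectl get rs
    kubectl get pods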

Actual old and new Pods were similar to :

busybox-deployment-568495f8b6-rv8sw
and
busybox-deployment-8659fbc5bf-6v8qt

I changed them to 1111… and 3333… so they are easy to differentiate in the lists below.

  • 1111… old, Pods created first
  • 3333… new, current fresh Pods

Let’s scale to 7 replicas:

  • Deployment gets command to scale to 7 replicas
  • Deployment sends this command to the ReplicaSet
  • ReplicaSet receives the command, counts 5 replicas, needs 7, so it creates 2 more Pods.

Deployment now shows 7 running replicas.

Let’s use this busybox image: busybox:1.30-uclibc

  • Deployment gets command : kubectl set image …
  • The Deployment determines that the Pods are running the wrong version of busybox
  • The Deployment creates a new ReplicaSet for the new image
  • A new set of Pods in the series busybox-deployment-3333333333 gets created
  • Some old busybox-deployment-1111111111 Pods are kept running during the rolling update

maxUnavailable determines how many old Pods may be unavailable during the update

maxSurge determines how many NEW Pods may be created above the desired number of replicas
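
A sketch of where these fields live in the Deployment spec; the 25% values below are the Kubernetes defaults, not values taken from the original manifest:

    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 25%   # how many old Pods may be unavailable during the update
          maxSurge: 25%         # how many extra Pods may exist above the desired replica count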

Investigate status of rolling update:

The 1111… series keeps on doing useful work while the update is in progress.
The 3333… series … some new Pods are already running.
The 3333… series … some new Pods are in the process of being created: ContainerCreating

A few seconds later the rolling update is complete. All old 1111… Pods are deleted.

7 new Pods up-to-date and available.

List of ReplicaSets.

Note the old ReplicaSet is kept in case we need to roll back a Deployment.

From https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#revision-history-limit

All old ReplicaSets will be kept by default, consuming resources in etcd and crowding the output of kubectl get rs, if this field is not set.

etcd is the ‘database’ / key-value data store Kubernetes uses to keep track of all its API objects.

etcd keeps at least 3 types of information about all Kubernetes API objects:

  • metadata ( data about data, https://en.wikipedia.org/wiki/Metadata ) … Kubernetes uses metadata to keep track of objects and to note object interrelationships ( see below )
  • our desired spec … details of the spec we need for our objects
  • the actual status of this specific object

This can be clearly and neatly seen when we run :
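
For example, dumping the full object shows its metadata, spec and status sections:

    kubectl get deployment busybox-deployment -o yaml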

Kubernetes Deployment course complete … delete Deployment :
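
For example:

    kubectl delete deployment busybox-deployment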

Deleting a Deployment deletes the Deployment itself, its underlying ReplicaSets, and all their Pods.

Your Turn

Experiment with all the Deployment settings: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#writing-a-deployment-spec

Determine the correct settings for all your production Pods and their controlling Deployments.

Original Source

https://www.alibabacloud.com/blog/pause-resume-and-scale-kubernetes-deployments_595019?spm=a2c41.13112163.0.0
