Pod Lifecycle, Container Lifecycle, Hooks and restartPolicy

Alibaba Cloud
14 min read · Apr 29, 2019


By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

This tutorial contains around 15 different Pods for you to explore a Pod’s lifecycle and corresponding status codes.

Simple Pod — Sleep 6 Seconds

nano myLifecyclePod-1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The Pod is running && sleep 6']

Create the Pod.

kubectl create -f myLifecyclePod-1.yaml 

pod/myapp-pod created

Let’s investigate the Pod status over time:

kubectl get po

NAME        READY   STATUS      RESTARTS   AGE
myapp-pod   1/1     Running     0          2s
myapp-pod   1/1     Running     0          4s
myapp-pod   1/1     Running     0          7s
myapp-pod   0/1     Completed   0          10s

Our Pod stays in running status for 6 seconds. Then the status turns to Completed.

Note the first 3 lines state READY: 1/1. This means that 1 container out of 1 container in the Pod is ready: running and able to be exec'ed into, attached to, and so on.

The last line states READY: 0/1 … the Pod is no longer ready ( for interactive use ) … it has completed.

Output of kubectl describe pod myapp-pod with ONLY important status fields shown:

Status:             Succeeded
  myapp-container:
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Jan 2019 09:04:28 +0200
      Finished:     Tue, 08 Jan 2019 09:04:34 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

Easily understandable:

  • Status: Succeeded … final overall status for the Pod
  • State: Terminated … lower-level state detail
  • Reason: Completed … Pod is terminated since it COMPLETED — that is the reason
  • Exit Code: 0 … final overall success exit code for the Pod
  • Started: 09:04:28 and Finished: 09:04:34 : Pod sleeps for 6 seconds
  • Ready: False … Pod no longer ready … it is terminated
  • Restart Count: 0 … no errors were found … no restarts ever done

Conditions:

  • Initialized True … all init containers have started successfully. There were none in our case.
  • Ready False … whether the Pod is able to serve requests. False right now since it is terminated.
  • ContainersReady False … all containers in the Pod are ready … only 1 container in our case
  • PodScheduled True … the Pod has been scheduled to a node ( node : a server running Kubernetes )

Demo complete, delete our Pod:

kubectl delete -f myLifecyclePod-1.yaml --force --grace-period=0

warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "myapp-pod" force deleted

Simple Pod — Exit 1 ( Error ) restartPolicy: Never

nano myLifecyclePod-2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && exit 1']
  restartPolicy: Never

Create the Pod.

kubectl create -f myLifecyclePod-2.yaml 

pod/myapp-pod created

Output of kubectl describe pod myapp-pod with ONLY important status fields shown:

Status:             Failed
  myapp-container:
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 08 Jan 2019 09:14:20 +0200
      Finished:     Tue, 08 Jan 2019 09:14:20 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
  • Status: Failed
  • Reason: Error … caused by exit code 1
  • Started … Finished: zero seconds runtime. The Pod exited with an error immediately.

Conditions are as before. The really useful status information is in the fields just described above.
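The Exit Code field is simply the exit status of the command that sh -c runs inside the container. The same behavior can be sketched locally in any POSIX shell, outside Kubernetes:

```shell
# The container's command: echo succeeds, then exit 1 fails the shell.
sh -c 'echo The Pod is running && exit 1'

# $? holds the exit status of the last command; this is the same
# value kubectl describe reports as Exit Code.
echo "Exit Code: $?"   # Exit Code: 1
```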

Demo complete, delete our Pod:

kubectl delete -f myLifecyclePod-2.yaml --force --grace-period=0

warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "myapp-pod" force deleted

Simple Pod — Peek into Early Status

Re-create the Pod from test number 1.

Immediately after that we peek into its status using kubectl describe pod myapp-pod

kubectl create -f myLifecyclePod-1.yaml 

pod/myapp-pod created

Output of kubectl describe pod myapp-pod with ONLY important status fields shown:

AFTER ONE SECOND:

Status:             Running
  myapp-container:
    State:          Running
      Started:      Tue, 08 Jan 2019 09:17:51 +0200
    Ready:          True
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True

Note that while the Pod is Running, both Ready and ContainersReady are True.

This is the normal healthy Pod condition.

AFTER TEN SECONDS:

Status:             Succeeded
  myapp-container:
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Jan 2019 09:17:51 +0200
      Finished:     Tue, 08 Jan 2019 09:17:57 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

Succeeded with exit code 0.

Compare this Succeeded with the Failed we got earlier:

Status:             Failed
  myapp-container:
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 08 Jan 2019 09:14:20 +0200
      Finished:     Tue, 08 Jan 2019 09:14:20 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

As you can see the first 5 lines of status provide ALL the information you need:

Conditions: its 4 content fields are identical for Succeeded and Failed Pods … on its own it is useless for status checks.

Demo complete, delete our Pod:

kubectl delete -f myLifecyclePod-1.yaml --force --grace-period=0

pod "myapp-pod" force deleted

Simple Pod — Exit 1 ( Error ) restartPolicy: Always

Our error Pod in test 2 had restartPolicy: Never

Once it crashed it stayed crashed.

Let’s investigate restartPolicy: Always on a crashing Pod:

nano myLifecyclePod-3.yaml 

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && exit 1']
  restartPolicy: Always

Note last line in the Pod YAML file: restartPolicy: Always

Create the Pod.

kubectl create -f myLifecyclePod-3.yaml 

pod/myapp-pod created

Let’s investigate the Pod status repeatedly over time using kubectl get po :

NAME        READY   STATUS             RESTARTS   AGE
myapp-pod   0/1     Error              1          4s
myapp-pod   0/1     CrashLoopBackOff   1          9s
myapp-pod   0/1     CrashLoopBackOff   1          15s
myapp-pod   0/1     Error              2          19s
myapp-pod   0/1     Error              2          28s
myapp-pod   0/1     CrashLoopBackOff   2          36s
myapp-pod   0/1     Error              3          51s

restartPolicy: Always repeatedly restarts the crashing Pod ( exit code 1 ).

The RESTARTS field grows larger over time.

We see either a CrashLoopBackOff or an Error status depending on the exact point in time we checked.
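The growing gaps between restarts come from the kubelet's back-off: per the Kubernetes docs, the delay before the next restart starts at 10 seconds, doubles each time, is capped at five minutes, and resets after ten minutes of successful running. A rough sketch of that schedule ( the constants are assumptions taken from the docs, not computed by this snippet ):

```shell
# Sketch of the crash-loop back-off schedule: 10s base delay,
# doubling per restart, capped at 300s (5 minutes).
delay=10
for restart in 1 2 3 4 5 6 7; do
  echo "restart $restart: wait ${delay}s before next attempt"
  delay=$((delay * 2))
  if [ "$delay" -gt 300 ]; then delay=300; fi
done
```

This is why the STATUS above flips between Error ( the container just ran and crashed ) and CrashLoopBackOff ( the kubelet is waiting out the delay ).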

Output of kubectl describe pod myapp-pod with ONLY important status fields shown:

Two different states shown:

REASON: CRASHLOOPBACKOFF

Status:             Running
  myapp-container:
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 08 Jan 2019 09:34:51 +0200
      Finished:     Tue, 08 Jan 2019 09:34:51 +0200
    Ready:          False
    Restart Count:  3
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

REASON: ERROR

Status:             Running
  myapp-container:
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 08 Jan 2019 09:35:32 +0200
      Finished:     Tue, 08 Jan 2019 09:35:32 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 08 Jan 2019 09:34:51 +0200
      Finished:     Tue, 08 Jan 2019 09:34:51 +0200
    Ready:          False
    Restart Count:  4
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

Once again you can see the first 5 lines of status provide ALL the information you need:

Conditions: its 4 content fields are identical again — not useful.

You saw Running, Succeeded and Failed status. There are two more:

UNKNOWN:

Shown when Kubernetes could not determine the status of the Pod. This is mostly because Kubernetes could not communicate with the node the Pod is running on.

PENDING:

Mostly caused by the time during which container images are downloaded over the Internet, or by the Pod waiting to be scheduled onto a node.

Demo complete, delete our Pod:

kubectl delete -f myLifecyclePod-3.yaml --force --grace-period=0

pod "myapp-pod" force deleted

This concludes our basic coverage of Pod status. Next we investigate Pod status using different restart policies.

restartPolicy: Always : Pod Sleep 1 Second

From https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy

Restart policy

A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never.

The default value is Always. restartPolicy applies to all Containers in the Pod.
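The three policies boil down to a simple decision rule. A sketch in shell ( should_restart is a hypothetical helper for illustration, not a real kubectl or kubelet command ):

```shell
# should_restart POLICY EXIT_CODE -> prints yes or no.
# Hypothetical helper mirroring the kubelet's restart decision.
should_restart() {
  policy=$1
  exit_code=$2
  case "$policy" in
    Always)    echo yes ;;   # restart on any exit, even success
    OnFailure) if [ "$exit_code" -ne 0 ]; then echo yes; else echo no; fi ;;
    Never)     echo no ;;    # never restart
  esac
}

should_restart Always 0      # yes
should_restart OnFailure 0   # no
should_restart OnFailure 1   # yes
should_restart Never 1       # no
```

Note that Always restarts the container even after a successful exit; this matters in the next test.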

Our first test case:

  • restartPolicy: Always
  • Pod sleeps 1 second

Seems benign enough — we expect a perfectly working Pod.

nano myLifecyclePod-4.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && sleep 1']
  restartPolicy: Always

Create the Pod.

kubectl create -f myLifecyclePod-4.yaml 

pod/myapp-pod created

Let’s investigate the Pod status over time:

kubectl get po

NAME        READY   STATUS             RESTARTS   AGE
myapp-pod   0/1     Completed          0          3s
myapp-pod   0/1     Completed          1          6s
myapp-pod   0/1     CrashLoopBackOff   1          9s
myapp-pod   0/1     CrashLoopBackOff   1          13s
myapp-pod   0/1     CrashLoopBackOff   1          15s
myapp-pod   1/1     Running            2          20s
myapp-pod   0/1     Completed          2          23s
myapp-pod   0/1     Completed          2          28s
myapp-pod   0/1     CrashLoopBackOff   2          35s

Not at all what we expected: continuous restarts and CrashLoopBackOff.

The reason is that restartPolicy: Always restarts the container every time it exits, even on success, and Kubernetes treats such rapid, repeated exits as a crash loop.

The Pod exit code = 0 ( success ), but the one-second runtime still triggers the CrashLoopBackOff throttling.

Let’s delete this Pod and see if we can rectify this.

kubectl delete -f myLifecyclePod-4.yaml --force --grace-period=0

pod "myapp-pod" force deleted

restartPolicy: OnFailure : Pod sleep 1 Second

nano myLifecyclePod-5.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && sleep 1']
  restartPolicy: OnFailure

Note restartPolicy: OnFailure at the end of the spec.

Create the Pod.

kubectl create -f myLifecyclePod-5.yaml

pod/myapp-pod created

Let’s investigate the Pod status over time:

kubectl get po

NAME        READY   STATUS      RESTARTS   AGE
myapp-pod   1/1     Running     0          3s
myapp-pod   0/1     Completed   0          6s
myapp-pod   0/1     Completed   0          10s
myapp-pod   0/1     Completed   0          16s
myapp-pod   0/1     Completed   0          21s
myapp-pod   0/1     Completed   0          30s

Success. Pod runs for a second, exits successfully and stays in Completed state permanently.

restartPolicy: OnFailure is better to use than restartPolicy: Always in most cases.

Demo complete, delete our Pod:

kubectl delete -f myLifecyclePod-5.yaml --force --grace-period=0

pod "myapp-pod" force deleted

restartPolicy: Never : Pod Sleep 1 Second

nano myLifecyclePod-6.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && sleep 1']
  restartPolicy: Never

Note : restartPolicy: Never

Create the Pod.

kubectl create -f myLifecyclePod-6.yaml

pod/myapp-pod created

Let’s investigate the Pod status over time:

kubectl get po

NAME        READY   STATUS      RESTARTS   AGE
myapp-pod   0/1     Completed   0          3s
myapp-pod   0/1     Completed   0          9s
myapp-pod   0/1     Completed   0          18s

The Pod completed successfully after 1 second. No restart was needed and none was done.

Demo complete, delete our Pod:

kubectl delete -f myLifecyclePod-6.yaml --force --grace-period=0

pod "myapp-pod" force deleted

restartPolicy: Never : Pod Exits with Error

nano myLifecyclePod-7.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && exit 1']
  restartPolicy: Never

Note the error exit 1 on our command.

Create the Pod.

kubectl create -f myLifecyclePod-7.yaml

pod/myapp-pod created

Let’s investigate the Pod status over time:

kubectl get po

NAME        READY   STATUS   RESTARTS   AGE
myapp-pod   0/1     Error    0          8s

The Pod fails immediately and stays that way because of restartPolicy: Never.

Status:             Failed
  myapp-container:
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 08 Jan 2019 10:00:34 +0200
      Finished:     Tue, 08 Jan 2019 10:00:34 +0200
    Ready:          False
    Restart Count:  0

Demo complete, delete our Pod:

kubectl delete -f myLifecyclePod-7.yaml --force --grace-period=0

pod "myapp-pod" force deleted

Container Lifecycle Hooks

Container lifecycle hooks are easy to understand, but they do not always seem to work.

However, you will see below that trying to prove they work exposed several surprising behaviors, specifically when those hooks are involved.

We will be using the same mypostStartPod.yaml ( with various mods ) throughout this exercise.

Simplest Case: Working Pod

nano mypostStartPod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && sleep 5']
  restartPolicy: Never

Create the Pod.

kubectl create -f mypostStartPod.yaml

pod/myapp-pod created

Output of kubectl describe pod myapp-pod with ONLY important status fields shown:

Status:             Succeeded
  myapp-container:
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Jan 2019 11:09:21 +0200
      Finished:     Tue, 08 Jan 2019 11:09:26 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

As seen before: all OK.

This is our reference case: everything works as expected.

kubectl delete -f mypostStartPod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

postStart Sleep 10 Seconds

postStart executes a command immediately after the container is created. Note that it runs asynchronously with the container's main process: Kubernetes does not guarantee which runs first.

nano mypostStartPod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && sleep 5']
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "sleep 10"]
  restartPolicy: Never

Create the Pod.

kubectl create -f mypostStartPod.yaml

pod/myapp-pod created

Output of kubectl describe pod myapp-pod with ONLY important status fields shown:

Status:             Succeeded
  myapp-container:
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Jan 2019 11:12:22 +0200
      Finished:     Tue, 08 Jan 2019 11:12:27 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

Warning  FailedPostStartHook  2s  kubelet, minikube  Exec lifecycle hook ([/bin/sh -c sleep 10]) for Container "myapp-container" in Pod "myapp-pod_default(812c05ce-1325-11e9-91d6-0800270102d2)" failed - error: command '/bin/sh -c sleep 10' exited with 137: , message: ""

This is not a syntax error: exit code 137 means the hook process was killed ( 137 = 128 + 9, SIGKILL ).

The likely cause: the main container only runs for 5 seconds, so the 10-second postStart hook was still running when the container exited, and the hook was killed.
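That exit code can be reproduced in any POSIX shell, without Kubernetes: 137 is how a shell reports a process killed by SIGKILL ( 128 + signal number 9 ):

```shell
# Start a shell and have it SIGKILL itself; the parent shell then
# reports exit status 128 + 9 = 137, the same code the kubelet logged.
sh -c 'kill -9 $$'
echo "Exit Code: $?"   # Exit Code: 137
```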

kubectl delete -f mypostStartPod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

postStart Echo to Termination-log

Let’s continue our investigations.

Send some text to /dev/termination-log.

nano mypostStartPod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && sleep 5']
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo In postStart > /dev/termination-log"]
  restartPolicy: Never

Create the Pod.

kubectl create -f mypostStartPod.yaml

pod/myapp-pod created

Output of kubectl describe pod myapp-pod :

Status:             Succeeded
  myapp-container:
    State:          Terminated
      Reason:       Completed
      Message:      In postStart
      Exit Code:    0
      Started:      Tue, 08 Jan 2019 11:17:25 +0200
      Finished:     Tue, 08 Jan 2019 11:17:30 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

AMAZING.

Note the Message: In postStart in the output. postStart works perfectly when all it has to do is echo some text to /dev/termination-log .

But the previous test using postStart ( sleep 10 ) gave the error exited with 137.

Demo complete, delete our Pod:

kubectl delete -f mypostStartPod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

preStop Echo to /dev/termination-log

preStop executes a command immediately before Kubernetes terminates a container ( for example, when the Pod is deleted ). It does not run when the container exits on its own.

nano mypostStartPod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && sleep 5']
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo In preStop > /dev/termination-log"]
  restartPolicy: Never

We attempt to send text to the termination-log.

Create the Pod.

kubectl create -f mypostStartPod.yaml

pod/myapp-pod created

Output of kubectl describe pod myapp-pod :

Status:             Succeeded
  myapp-container:
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Jan 2019 11:24:10 +0200
      Finished:     Tue, 08 Jan 2019 11:24:15 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

It seems as if the preStop echo did not work: there is no /dev/termination-log output shown. This is consistent with preStop only running when Kubernetes terminates a running container; here the container completed on its own.

kubectl delete -f mypostStartPod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

preStop Sleeps 10 Seconds

nano mypostStartPod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running && sleep 5']
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 10"]
  restartPolicy: Never

Create the Pod.

kubectl create -f mypostStartPod.yaml

pod/myapp-pod created

Truncated output of kubectl describe pod myapp-pod :

Status:             Succeeded
  myapp-container:
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Jan 2019 11:27:30 +0200
      Finished:     Tue, 08 Jan 2019 11:27:35 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

Started to Finished is 5 seconds: no additional 10 seconds for preStop are visible. Again, the container exited on its own, so preStop never ran.

kubectl delete -f mypostStartPod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

Multiple preStop Commands

Chaining multiple commands in a single preStop hook ( separated by ; ) also seems to be accepted:

nano mypostStartPod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running']
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 1 ; echo In preStop > /dev/termination-log"]
  restartPolicy: Never

Create the Pod.

kubectl create -f mypostStartPod.yaml

pod/myapp-pod created

Status:             Succeeded
  myapp-container:
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Jan 2019 11:34:39 +0200
      Finished:     Tue, 08 Jan 2019 11:34:39 +0200
    Ready:          False
    Restart Count:  0
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True

Delete Pod.

kubectl delete -f mypostStartPod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

Multiple postStart and preStop Commands

This gives error messages.

nano mypostStartPod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running']
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "sleep 1 ; echo In postStart > /dev/termination-log"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 1 ; echo In preStop > /dev/termination-log"]
  restartPolicy: Never

Create the Pod.

kubectl create -f mypostStartPod.yaml

pod/myapp-pod created

Status:             Succeeded
  myapp-container:
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Jan 2019 11:40:29 +0200
      Finished:     Tue, 08 Jan 2019 11:40:29 +0200
    Ready:          False
    Restart Count:  0

Warning  FailedPostStartHook  3s  kubelet, minikube  Exec lifecycle hook ([/bin/sh -c sleep 1 ; echo In postStart > /dev/termination-log]) for Container "myapp-container" in Pod "myapp-pod_default(6e795f76-1329-11e9-91d6-0800270102d2)" failed - error: command '/bin/sh -c sleep 1 ; echo In postStart > /dev/termination-log' exited with 126: , message: "cannot exec in a stopped state: unknown\r\n"
Warning  FailedPreStopHook    3s  kubelet, minikube  Exec lifecycle hook ([/bin/sh -c sleep 1 ; echo In preStop > /dev/termination-log]) for Container "myapp-container" in Pod "myapp-pod_default(6e795f76-1329-11e9-91d6-0800270102d2)" failed - error: command '/bin/sh -c sleep 1 ; echo In preStop > /dev/termination-log' exited with 126: , message: "cannot exec in a stopped state: unknown\r\n"

Delete Pod.

kubectl delete -f mypostStartPod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

Most amazing final test below …

Multiple postStart and preStop Commands — Sleep 10 Pod

If I add sleep 10 to the Pod command, this YAML runs perfectly: no error messages like above.

Why does adding a sleep to the main Pod command fix the hook errors? Exec-style hooks run inside the container, so the container's main process must still be running when they execute. The cannot exec in a stopped state error in the previous test was not a syntax problem but exactly this: the container had already stopped.

command: ['sh', '-c', 'echo The Pod is running ; sleep 10']
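A minimal local sketch of this race, using a background sleep as a stand-in for the container's main process ( no Kubernetes involved ):

```shell
# Stand-in for the container's main process.
sh -c 'sleep 1' &
main=$!

# While the main process is alive, an exec-style hook has a target.
kill -0 "$main" 2>/dev/null && echo "main alive: hook can exec"

# Once the main process has exited, there is nothing to exec into.
wait "$main"
kill -0 "$main" 2>/dev/null || echo "main gone: cannot exec in a stopped state"
```

With sleep 10 in the Pod command, the container stays alive long enough for both one-second hooks to run.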

Edit mypostStartPod.yaml for your test Pod.

nano mypostStartPod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The Pod is running ; sleep 10']
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "sleep 1 ; echo In postStart > /dev/termination-log"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 1 ; echo In preStop > /dev/termination-log"]
  restartPolicy: Never

After Pod ran, the termination log contains this:

Message:   In postStart
Started:   Tue, 08 Jan 2019 14:31:21 +0200
Finished:  Tue, 08 Jan 2019 14:31:31 +0200

Pod also ran successfully for 10 seconds.

The final preStop command should have overwritten our termination log:

command: ["/bin/sh", "-c", "sleep 1 ; echo In preStop > /dev/termination-log"]

No such output appears in our describe output.

Investigation completed. Delete Pod.

kubectl delete -f mypostStartPod.yaml --force --grace-period=0

pod "myapp-pod" force deleted

During all these tests, even badly failing postStart and preStop hooks always resulted in a successful Pod completion. postStart and preStop problems are only warnings and do not result in Pod failure.

My Conclusion:

Stay away from postStart and preStop

Alternative interpretation: I made several errors in the 9 tests above and it actually works perfectly.

Interested developers can read more about postStart and preStop in the official Kubernetes documentation.

Reference: https://www.alibabacloud.com/blog/pod-lifecycle-container-lifecycle-hooks-and-restartpolicy_594727?spm=a2c41.12820884.0.0

