= "curl –silent –fail localhost: 9200 / _cluster / health || exit 1" \
–-health-interval = 5s \
–-health-retries = 12 \
–-health-timeout = 20s \
{image}
For demonstration, we will use file creation as the check command. If the application has not reached a working state (here, the file has not been created) within the allotted number of checks, the container is marked as unhealthy; the per-check time limit is set to 0, and until the specified number of checks has run the status stays "health: starting":
vagrant@ubuntu:~$ sudo docker run \
-d --name healt \
--health-timeout=0s \
--health-interval=5s \
--health-retries=3 \
--health-cmd="ls /halth" \
ubuntu bash -c 'sleep 1000'
c0041a8d973e74fe8c96a81b6f48f96756002485c74e51a1bd4b3bc9be0d9ec5
vagrant@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c0041a8d973e ubuntu "bash -c 'sleep 1000'" 4 seconds ago Up 3 seconds (health: starting) healt
vagrant@ubuntu:~$ sleep 20
vagrant@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c0041a8d973e ubuntu "bash -c 'sleep 1000'" 38 seconds ago Up 37 seconds (unhealthy) healt
vagrant@ubuntu:~$ sudo docker rm -f healt
healt
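Before such a container is removed, the reason for the unhealthy status can be read from docker inspect, which stores the current status and a log of the most recent check runs. A minimal sketch, using the healt container from the example above:
vagrant@ubuntu:~$ sudo docker inspect --format '{{.State.Health.Status}}' healt # prints starting, healthy or unhealthy
vagrant@ubuntu:~$ sudo docker inspect --format '{{json .State.Health}}' healt # full JSON with FailingStreak and the Log of recent checks, including their output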
If at least one of the checks succeeds, the container is marked as healthy immediately:
vagrant@ubuntu:~$ sudo docker run \
-d --name healt \
--health-timeout=0s \
--health-interval=5s \
--health-retries=3 \
--health-cmd="ls /halth" \
ubuntu bash -c 'touch /halth && sleep 1000'
vagrant@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
160820d11933 ubuntu "bash -c 'touch /hal…" 4 seconds ago Up 2 seconds (health: starting) healt
vagrant@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
160820d11933 ubuntu "bash -c 'touch /hal…" 6 seconds ago Up 5 seconds (healthy) healt
vagrant@ubuntu:~$ sudo docker rm -f healt
healt
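The same check can be built into the image itself with the Dockerfile HEALTHCHECK instruction, so that every container started from that image is monitored without extra docker run flags. A minimal sketch, assuming a hypothetical image name healt-image:
vagrant@ubuntu:~$ cat << EOF > Dockerfile
FROM ubuntu
# equivalent of the --health-interval, --health-retries and --health-cmd flags above
HEALTHCHECK --interval=5s --retries=3 CMD ls /halth || exit 1
CMD ["bash", "-c", "touch /halth && sleep 1000"]
EOF
vagrant@ubuntu:~$ sudo docker build -t healt-image .
vagrant@ubuntu:~$ sudo docker run -d --name healt healt-image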
The checks keep repeating at the given interval for as long as the container runs, so a healthy container can become unhealthy again:
vagrant@ubuntu:~$ sudo docker run \
-d --name healt \
--health-timeout=0s \
--health-interval=5s \
--health-retries=3 \
--health-cmd="ls /halth" \
ubuntu bash -c 'touch /halth && sleep 60 && rm -f /halth && sleep 60'
vagrant@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ec3a4abf74b ubuntu "bash -c 'touch /hal…" 7 seconds ago Up 5 seconds (health: starting) healt
vagrant@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ec3a4abf74b ubuntu "bash -c 'touch /hal…" 24 seconds ago Up 22 seconds (healthy) healt
vagrant@ubuntu:~$ sleep 60
vagrant@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ec3a4abf74b ubuntu "bash -c 'touch /hal…" About a minute ago Up About a minute (unhealthy) healt
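These state transitions can also be watched as they happen through the Docker events stream; a minimal sketch (health_status is a documented container event type):
vagrant@ubuntu:~$ sudo docker events --filter 'event=health_status'
# prints one line per transition, showing the container id, name and the new healthy/unhealthy state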
Kubernetes provides (kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) three types of probes that check the state of a container from the outside. They are even more important than Docker health checks, because they serve not only to report status but also to manage the application life cycle, including rolling out and rolling back updates. Configuring them incorrectly can, and often does, make the application malfunction: if the liveness probe fires before the application has started working, Kubernetes will kill the container, never letting it come up. Let's consider probes in more detail.
The liveness probe is used to determine the health of the application; if the application crashes and stops responding to the liveness probe, Kubernetes restarts the container. As an example we will take a shell (exec) probe, because it is the simplest to demonstrate, but in practice it should be used only in special cases, for example, when the container runs not as a long-lived server but as a Job that does its work and exits once the result is achieved. For server checks it is better to use HTTP probes, which are performed by the kubelet itself, so they require no curl inside the container and do not depend on external kube-proxy settings; for databases a TCP probe must be used, since they usually do not support the HTTP protocol (a sketch of both variants appears after the example below). Let's create a long-lived container at www.katacoda.com/courses/kubernetes/playground:
controlplane $ cat << EOF > liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness
spec:
  containers:
  - name: healtcheck
    image: alpine:3.5
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 10; rm -rf /tmp/healthy; sleep 60
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
controlplane $ kubectl create -f liveness.yaml
pod/liveness created
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness 1/1 Running 2 2m11s
controlplane $ kubectl describe pod/liveness | tail -n 10
Type Reason Age From Message
---- ------ --- ---- -------
Normal Scheduled 2m37s default-scheduler Successfully assigned default/liveness to node01
Normal Pulling 2m33s kubelet, node01 Pulling image "alpine:3.5"
Normal Pulled 2m30s kubelet, node01 Successfully pulled image "alpine:3.5"
Normal Created 33s (x3 over 2m30s) kubelet, node01 Created container healtcheck
Normal Started 33s (x3 over 2m30s) kubelet, node01 Started container healtcheck
Normal Pulled 33s (x2 over 93s) kubelet, node01 Container image "alpine:3.5" already present on machine
Warning Unhealthy 3s (x9 over 2m13s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 3s (x3 over 2m3s) kubelet, node01 Container healtcheck failed liveness probe, will be restarted
We see that the container is constantly being restarted.
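As noted above, for a real server an HTTP probe is preferable and for a database a TCP probe. A minimal sketch of both variants, meant to replace the exec probe in the manifest above; the path /healthz and the ports 8080 and 5432 are illustrative assumptions:
livenessProbe:
  httpGet:        # the kubelet itself performs GET /healthz against the container's port
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 5
livenessProbe:
  tcpSocket:      # the kubelet only checks that the port accepts a TCP connection
    port: 5432
  initialDelaySeconds: 15
  periodSeconds: 5
The same stanza under the name readinessProbe does not restart the container; instead Kubernetes stops routing traffic to the Pod until the check passes.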
controlplane $ cat << EOF > liveness.yaml
apiVersion: