kind: Pod
metadata:
  name: liveness
spec:
  containers:
  - name: healtcheck
    image: alpine:3.5
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 60
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
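Before creating the pod, note how the timings interact: the container creates /tmp/healthy, removes it after 30 seconds, and then sleeps for another 60. The probe starts 15 seconds after the container starts, repeats every 5 seconds, and with the default failureThreshold of 3 the kubelet kills the container roughly 15 seconds after the file disappears. The probe fields and their defaults can be looked up with kubectl explain (a reference check, not part of the original session):
controlplane $ kubectl explain pod.spec.containers.livenessProbe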
controlplane $ kubectl create -f liveness.yaml
pod/liveness created
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness 1/1 Running 2 2m53s
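The RESTARTS counter keeps growing: every cycle the container is healthy for about half a minute and is then killed by the failing probe. To observe the restarts as they happen, a watch works (an optional check, not from the original session):
controlplane $ kubectl get pod liveness --watch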
controlplane $ kubectl describe pod/liveness | tail -n 15
SecretName: default-token-9v5mb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m44s default-scheduler Successfully assigned default/liveness to node01
Normal Pulled 68s (x3 over 3m35s) kubelet, node01 Container image "alpine:3.5" already present on machine
Normal Created 68s (x3 over 3m35s) kubelet, node01 Created container healtcheck
Normal Started 68s (x3 over 3m34s) kubelet, node01 Started container healtcheck
Warning Unhealthy 23s (x9 over 3m3s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 23s (x3 over 2m53s) kubelet, node01 Container healtcheck failed liveness probe, will be restarted
The cluster events also show that once cat /tmp/healthy starts failing, the container is killed and re-created:
controlplane $ kubectl get events
controlplane $ kubectl get events | grep pod/liveness
13m Normal Scheduled pod/liveness Successfully assigned default/liveness to node01
13m Normal Pulling pod/liveness Pulling image "alpine:3.5"
13m Normal Pulled pod/liveness Successfully pulled image "alpine:3.5"
10m Normal Created pod/liveness Created container healtcheck
10m Normal Started pod/liveness Started container healtcheck
10m Warning Unhealthy pod/liveness Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
10m Normal Killing pod/liveness Container healtcheck failed liveness probe, will be restarted
10m Normal Pulled pod/liveness Container image "alpine:3.5" already present on machine
8m32s Normal Scheduled pod/liveness Successfully assigned default/liveness to node01
4m41s Normal Pulled pod/liveness Container image "alpine:3.5" already present on machine
4m41s Normal Created pod/liveness Created container healtcheck
4m41s Normal Started pod/liveness Started container healtcheck
2m51s Warning Unhealthy pod/liveness Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
5m11s Normal Killing pod/liveness Container healtcheck failed liveness probe, will be restarted
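An exec command is only one of the probe mechanisms: the kubelet can also poll an HTTP endpoint (httpGet) or try to open a TCP socket (tcpSocket). For illustration only, an httpGet variant of the probe could look like this (the alpine image above does not actually serve HTTP, and /healthz on port 8080 is a hypothetical endpoint):
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 5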
Now let's look at the readiness probe. Passing this probe indicates that the application is ready to accept requests, so the service can route traffic to the pod:
controlplane $ cat << EOF > readiness.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    matchLabels:
      app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - name: readiness
        image: python
        args:
        - /bin/sh
        - -c
        - sleep 15 && (hostname > health) && python -m http.server 9000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 1
          periodSeconds: 5
EOF
controlplane $ kubectl create -f readiness.yaml
deployment.apps/readiness created
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness-fd8d996dd-cfsdb 0/1 ContainerCreating 0 7s
readiness-fd8d996dd-sj8pl 0/1 ContainerCreating 0 7s
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness-fd8d996dd-cfsdb 0/1 Running 0 6m29s
readiness-fd8d996dd-sj8pl 0/1 Running 0 6m29s
controlplane $ kubectl exec -it readiness-fd8d996dd-cfsdb -- curl localhost:9000/health
readiness-fd8d996dd-cfsdb
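The HTTP server answers, yet the pods stay 0/1 READY: the probe runs cat /tmp/healthy, while the startup command writes the hostname to a health file in the server's working directory, so the probe never succeeds. A probe that matches what the container actually serves could look like this (a sketch using an httpGet check against the built-in HTTP server, not the manifest used above):
readinessProbe:
  httpGet:
    path: /health
    port: 9000
  initialDelaySeconds: 1
  periodSeconds: 5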
Still, the containers themselves are serving requests. Let's send traffic to them through a service:
controlplane $ kubectl expose deploy readiness \
--type=LoadBalancer \
--name=readiness \
--port=9000 \
--target-port=9000
service/readiness exposed
controlplane $ kubectl get svc readiness
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
readiness LoadBalancer 10.98.36.51 <pending> 9000:32355/TCP 98s
controlplane $ curl localhost:9000
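The loop below reads the service address from an $IP variable that is never set in this transcript; presumably it holds the cluster IP of the service, which could be obtained like this (an assumption, not part of the original session):
controlplane $ IP=$(kubectl get svc readiness -o jsonpath='{.spec.clusterIP}')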
controlplane $ for i in {1..5}; do curl $IP:9000/health; done
1
2
3
4
5
Each container starts with a delay. Let's check what happens if one of the containers is restarted, and whether traffic will be redirected to it:
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness-5dd64c6c79-9vq62 0/1 CrashLoopBackOff 6 15m
readiness-5dd64c6c79-sblvl