IT Cloud - Eugeny Shtoltc

This can be done through the UI. Next to the menu, click the drop-down list and create a separate project. In the Kubernetes Engine section, choose to create a cluster. Give it a name, 2 CPUs, the europe-north1 zone (the data center in Finland is the closest one to St. Petersburg) and the latest version of Kubernetes. After the cluster is created, click Connect and select Cloud Shell. To create the cluster through the API instead, click the button in the upper right corner to open the console panel and enter:

      gcloud container clusters create mycluster --zone europe-north1-a
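
      The node count and machine type can also be set explicitly at creation time. A minimal sketch (these flag values just spell out the defaults we ended up with; they are not part of the original session):

      gcloud container clusters create mycluster \
        --zone europe-north1-a \
        --num-nodes 3 \
        --machine-type n1-standard-1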

      After a while (it took me two and a half minutes), three virtual machines will be brought up, with the operating system installed and disks mounted. Let's check:

      esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

      NAME LOCATION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS

      mycluster europe-north1-a 35.228.37.100 n1-standard-1 1.10.9-gke.5 3 RUNNING

      esschtolts@cloudshell:~ (essch)$ gcloud compute instances list

      NAME MACHINE_TYPE EXTERNAL_IP STATUS

      gke-mycluster-default-pool-43710ef9-0168 n1-standard-1 35.228.73.217 RUNNING

      gke-mycluster-default-pool-43710ef9-39ck n1-standard-1 35.228.75.47 RUNNING

      gke-mycluster-default-pool-43710ef9-g76k n1-standard-1 35.228.117.209 RUNNING

      Let's connect to the cluster:

      esschtolts@cloudshell:~ (essch)$ gcloud projects list

      PROJECT_ID NAME PROJECT_NUMBER

      agile-aleph-203917 My First Project 546748042692

      essch app 283762935665
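
      As a side note (not from the original session), the default project can be set once so that --project does not have to be passed to every command:

      gcloud config set project essch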

      esschtolts@cloudshell:~ (essch)$ gcloud container clusters get-credentials mycluster \
      --zone europe-north1-a \
      --project essch

      Fetching cluster endpoint and auth data.

      kubeconfig entry generated for mycluster.
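
      To make sure kubectl now points at the new cluster, we can look at the current kubeconfig context (a quick check, not in the original output; the context name is generated from the project, zone, and cluster name):

      kubectl config current-context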

      We don't have any pods yet:

      esschtolts@cloudshell:~ (essch)$ kubectl get pods

      No resources found.

      Let's create a deployment with three Nginx replicas:

      esschtolts@cloudshell:~ (essch)$ kubectl run nginx --image=nginx --replicas=3

      deployment.apps "nginx" created
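
      Note that on kubectl 1.18 and later, kubectl run creates a bare pod rather than a Deployment; a sketch of the equivalent on newer versions:

      kubectl create deployment nginx --image=nginx
      kubectl scale deployment nginx --replicas=3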

      Let's check its composition:

      esschtolts@cloudshell:~ (essch)$ kubectl get deployments --selector=run=nginx

      NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

      nginx 3 3 3 3 14s

      esschtolts@cloudshell:~ (essch)$ kubectl get pods --selector=run=nginx

      NAME READY STATUS RESTARTS AGE

      nginx-65899c769f-9whdx 1/1 Running 0 43s

      nginx-65899c769f-szwtd 1/1 Running 0 43s

      nginx-65899c769f-zs6g5 1/1 Running 0 43s

      Let's see how the three replicas are distributed across the nodes (the scheduler does not guarantee an even spread; as the output below shows, two of them landed on the same node):

      esschtolts@cloudshell:~ (essch)$ kubectl describe pod nginx-65899c769f-9whdx | grep Node:

      Node: gke-mycluster-default-pool-43710ef9-g76k/10.166.0.5

      esschtolts@cloudshell:~ (essch)$ kubectl describe pod nginx-65899c769f-szwtd | grep Node:

      Node: gke-mycluster-default-pool-43710ef9-39ck/10.166.0.4

      esschtolts@cloudshell:~ (essch)$ kubectl describe pod nginx-65899c769f-zs6g5 | grep Node:

      Node: gke-mycluster-default-pool-43710ef9-g76k/10.166.0.5
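
      The same pod-to-node mapping can be seen in a single command with the wide output format:

      kubectl get pods -o wide --selector=run=nginx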

      Now let's create a load balancer:

      esschtolts@cloudshell:~ (essch)$ kubectl expose deployment nginx --type="LoadBalancer" --port=80

      service "nginx" exposed

      Let's check that it was created:

      esschtolts@cloudshell:~ (essch)$ kubectl get svc --selector=run=nginx

      NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

      nginx LoadBalancer 10.27.245.187 <pending> 80:31621/TCP 11s

      esschtolts@cloudshell:~ (essch)$ sleep 60;

      esschtolts@cloudshell:~ (essch)$ kubectl get svc --selector=run=nginx

      NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

      nginx LoadBalancer 10.27.245.187 35.228.212.163 80:31621/TCP 1m

      Let's check that it works:

      esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 2> /dev/null | grep h1

      <h1>Welcome to nginx!</h1>

      To avoid copying the full names every time, let's save them in variables (more about the JSONPath format in the Go documentation: https://golang.org/pkg/text/template/#pkg-overview):

      esschtolts@cloudshell:~ (essch)$ pod1=$(kubectl get pods -o jsonpath={.items[0].metadata.name});

      esschtolts@cloudshell:~ (essch)$ pod2=$(kubectl get pods -o jsonpath={.items[1].metadata.name});

      esschtolts@cloudshell:~ (essch)$ pod3=$(kubectl get pods -o jsonpath={.items[2].metadata.name});

      esschtolts@cloudshell:~ (essch)$ echo $pod1 $pod2 $pod3

      nginx-65899c769f-9whdx nginx-65899c769f-szwtd nginx-65899c769f-zs6g5
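
      All three names can also be fetched in a single call; a sketch:

      kubectl get pods -o jsonpath='{.items[*].metadata.name}'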

      Let's make each pod's page unique by copying a distinct page into each replica, and then verify the balancing by watching how requests are distributed across the pods:

      esschtolts@cloudshell:~ (essch)$ echo 1 > test.html;

      esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod1}:/usr/share/nginx/html/index.html

      esschtolts@cloudshell:~ (essch)$ echo 2 > test.html;

      esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod2}:/usr/share/nginx/html/index.html

      esschtolts@cloudshell:~ (essch)$ echo 3 > test.html;

      esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod3}:/usr/share/nginx/html/index.html
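
      The copied content can be verified directly inside a pod; a sketch:

      kubectl exec ${pod1} -- cat /usr/share/nginx/html/index.html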

      esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80

      3

      2

      1

      esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80

      3

      1

      1
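
      To see the distribution over more requests at once, the responses can be counted (a side check, not in the original session):

      for i in $(seq 1 9); do curl -s 35.228.212.163; done | sort | uniq -c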

      Let's check the cluster's fault tolerance by deleting one pod:

      esschtolts@cloudshell:~ (essch)$ kubectl delete pod ${pod1} && kubectl get pods && sleep 10 && kubectl get pods

      pod "nginx-65899c769f-9whdx" deleted

      NAME READY STATUS RESTARTS AGE

      nginx-65899c769f-42rd5 0/1 ContainerCreating 0 1s

      nginx-65899c769f-9whdx 0/1 Terminating 0 54m

      nginx-65899c769f-szwtd 1/1 Running 0 54m

      nginx-65899c769f-zs6g5 1/1 Running 0 54m

      NAME READY STATUS RESTARTS AGE

      nginx-65899c769f-42rd5 1/1 Running 0 12s

      nginx-65899c769f-szwtd 1/1 Running 0 55m

      nginx-65899c769f-zs6g5 1/1 Running 0 55m
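
      The replacement pod being scheduled and started can also be watched live; a sketch:

      kubectl get pods --watch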

