IT Cloud - Eugeny Shtoltc
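      The listing opens mid-file: what follows is the tail of the deployment manifest for the LAMP stack. Its header did not make it into the listing; a minimal reconstruction is given below, assuming the standard Deployment header (the apiVersion value apps/v1 is an assumption and may differ on older clusters):

      # reconstructed header (not in the original listing); apiVersion is an assumption
      apiVersion: apps/v1
      kind: Deployment
      metadata: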
        name: nginxlamp
      spec:
        selector:
          matchLabels:
            app: lamp
        replicas: 1
        template:
          metadata:
            labels:
              app: lamp
          spec:
            containers:
            - name: lamp
              image: mattrayner/lamp:latest-1604-php5
              ports:
              - containerPort: 80
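      The create step for this manifest is not shown in the listing; it would be applied like any other manifest (a sketch — the file name deployment.yaml is an assumption, since the cat command for it does not appear above):

      # hypothetical file name
      esschtolts@cloudshell:~/bitrix (essch)$ kubectl create -f deployment.yaml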

      esschtolts@cloudshell:~/bitrix (essch)$ cat loadbalancer.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: frontend
      spec:
        type: LoadBalancer
        ports:
        - name: front
          port: 80
          targetPort: 80
        selector:
          app: lamp
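      The service itself is then created from this manifest; that step is likewise not shown in the listing, but it would look like this (a sketch):

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl create -f loadbalancer.yaml
      service "frontend" created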

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods
      NAME READY STATUS RESTARTS AGE
      nginxlamp-7fb6fdd47b-jttl8 2/2 Running 0 3m
      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get svc
      NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
      frontend LoadBalancer 10.55.242.137 35.228.73.217 80:32701/TCP,8080:32568/TCP 4m
      kubernetes ClusterIP 10.55.240.1 <none> 443/TCP 48m
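      At this point the stack should answer on the balancer's external IP, the EXTERNAL-IP column in the listing above (a quick check; the address differs in every deployment):

      # the IP is taken from the kubectl get svc output above
      esschtolts@cloudshell:~/bitrix (essch)$ curl -I http://35.228.73.217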

      Now we can create identical copies of our clusters, for example, for production and development, but balancing will not work as expected: the balancer finds PODs by label, and PODs in both the production and the development cluster match that label. Placing the clusters in different projects would not be an obstacle either, and although for many tasks that is a big plus, it is not in the case of a developers' cluster and a production cluster. Namespaces are used to delimit the scope. We already use them without noticing: when we list PODs without specifying a scope, the default namespace is assumed, but PODs from the system scope are not shown:

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace

      NAME STATUS AGE

      default Active 5h

      kube-public Active 5h

      kube-system Active

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=kube-system

      NAME READY STATUS RESTARTS AGE

      event-exporter-v0.2.3-85644fcdf-tdt7h 2/2 Running 0 5h

      fluentd-gcp-scaler-697b966945-bkqrm 1/1 Running 0 5h

      fluentd-gcp-v3.1.0-xgtw9 2/2 Running 0 5h

      heapster-v1.6.0-beta.1-5649d6ddc6-p549d 3/3 Running 0 5h

      kube-dns-548976df6c-8lvp6 4/4 Running 0 5h

      kube-dns-548976df6c-mcctq 4/4 Running 0 5h

      kube-dns-autoscaler-67c97c87fb-zzl9w 1/1 Running 0 5h

      kube-proxy-gke-bitrix-default-pool-38fa77e9-0wdx 1/1 Running 0 5h

      kube-proxy-gke-bitrix-default-pool-38fa77e9-wvrf 1/1 Running 0 5h

      l7-default-backend-5bc54cfb57-6qk4l 1/1 Running 0 5h

      metrics-server-v0.2.1-fd596d746-g452c 2/2 Running 0 5h

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=default

      NAME READY STATUS RESTARTS AGE
      nginxlamp-b5dcb7546-g8j5r 1/1 Running 0 4h

      Let's create a scope:

      esschtolts@cloudshell:~/bitrix (essch)$ cat namespace.yaml

      apiVersion: v1
      kind: Namespace
      metadata:
        name: development
        labels:
          name: development

      esschtolts@cloudshell:~ (essch)$ kubectl create -f namespace.yaml

      namespace "development" created

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace --show-labels

      NAME STATUS AGE LABELS

      default Active 5h <none>
      development Active 16m name=development
      kube-public Active 5h <none>
      kube-system Active 5h <none>
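      The same namespace could also have been created imperatively, without a manifest file (an equivalent sketch):

      esschtolts@cloudshell:~ (essch)$ kubectl create namespace development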

      The essence of working with scopes is that we assign a scope to specific clusters and can then execute commands with that scope specified, so that they apply only to it. At the same time, apart from keys to commands such as kubectl get pods, the scope does not appear anywhere else: the configuration files of controllers (Deployment, DaemonSet and others) and of services (LoadBalancer, NodePort and others) make no mention of it, which allows them to be transferred seamlessly between scopes. That is especially relevant for the development pipeline: developer server, test server, and production server. Scopes are recorded in the cluster context file $HOME/.kube/config and can be viewed with the kubectl config view command. In my cluster context entry no scope entry appears (the default is default):

      - context:
          cluster: gke_essch_europe-north1-a_bitrix
          user: gke_essch_europe-north1-a_bitrix
        name: gke_essch_europe-north1-a_bitrix

      You can see something like this:

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl config view -o jsonpath='{.contexts[4]}'

      {gke_essch_europe-north1-a_bitrix {gke_essch_europe-north1-a_bitrix gke_essch_europe-north1-a_bitrix []}}

      Let's create a new context for this user and cluster:

      esschtolts@cloudshell:~ (essch)$ kubectl config set-context dev \
      > --namespace=development \
      > --cluster=gke_essch_europe-north1-a_bitrix \
      > --user=gke_essch_europe-north1-a_bitrix
      Context "dev" modified.


      As a result, the following was added:

      - context:
          cluster: gke_essch_europe-north1-a_bitrix
          namespace: development
          user: gke_essch_europe-north1-a_bitrix
        name: dev
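      A context does not have to be made current in order to be used: any single command can be pointed at it with the --context key (a sketch):

      esschtolts@cloudshell:~ (essch)$ kubectl get pods --context=dev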

      Now it remains to switch to it:

      esschtolts@cloudshell:~ (essch)$ kubectl config use-context dev
      Switched to context "dev".
      esschtolts@cloudshell:~ (essch)$ kubectl config current-context
      dev
      esschtolts@cloudshell:~ (essch)$ kubectl get pods
      No resources found.

      esschtolts@cloudshell:~ (essch)$ kubectl get pods --namespace=default
      NAME READY STATUS RESTARTS AGE
      nginxlamp-b5dcb7546-krkm2 1/1 Running 0 10h
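      Since, as noted above, controller manifests carry no scope of their own, the same deployment can now be rolled out into the development scope, either under the switched dev context or with an explicit key (a sketch; deployment.yaml refers to the manifest at the beginning of this section):

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl create -f deployment.yaml --namespace=development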

      You could also add a namespace to the manifest itself, in its metadata section.

