by name. Let's create a subnet:

      docker network create --driver overlay --subnet 10.10.1.0/24 --opt encrypted services
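
      One way to check that the network has appeared and uses the overlay driver:

      docker network ls --filter driver=overlay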

      Now we need to fill the cluster with containers. To do this, we create not a container but a service, which is a template for creating containers on different nodes. The number of replicas to create is given by the --replicas flag, and the replicas are spread across the nodes randomly but as evenly as possible. Besides the replicas, the service has a load balancer: the ports it proxies (the entry ports for all replicas) are given with the -p flag, and service discovery (finding the working replicas, determining their IP addresses, restarting them) is performed by the balancer on its own.

      docker service create -p 80:80 --name busybox --replicas 2 --network services busybox sleep 3000

      Let's check the state of the service with docker service ls, the state and even distribution of its container replicas with docker service ps busybox, and that it responds with wget -O- 10.10.1.2. A service is a higher-level abstraction that includes the container and also organizes its updates (and much more): to change the container's parameters you do not delete and recreate the container, you simply update the service, and the service first creates a new container with the updated configuration and deletes the old one only after the new one has started.
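
      For example, a minimal sketch of such an update (the busybox:1.36 image tag is only an illustrative choice):

      docker service update --image busybox:1.36 busybox   # Swarm performs a rolling update of the replicas
      docker service ps busybox                            # watch the state of the replicas during the update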

      Docker Swarm has its own load balancer, ingress load balancing, which balances the load between replicas on the port declared when the service is created, in our case port 80. The entry point can be any server in the cluster on this port, but the response will come from the server to which the balancer forwarded the request.
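
      A small sketch of this behavior, assuming a service that actually listens on port 80 inside the container (for example nginx, unlike the sleeping busybox above), an arbitrary published port 8080 and hypothetical node names node-1 and node-2:

      docker service create -p 8080:80 --name web --replicas 2 --network services nginx
      curl http://node-1:8080/   # any node accepts the request on the published port
      curl http://node-2:8080/   # the reply may come from a replica running on a different node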

      We can also save data to the host machine, as with a regular container; there are two options for this:

      docker service create --mount type=bind,src=…,dst=… --name … … # bind: mount a host directory directly

      docker service create --mount type=volume,src=…,dst=… --name … … # volume: a named volume stored on the host
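
      For example, a sketch with the placeholders filled in (the web-data volume name, the web-static service name and the nginx image are only illustrative assumptions):

      docker service create --mount type=volume,src=web-data,dst=/usr/share/nginx/html --name web-static nginx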

      The application is described in a Docker Compose file and deployed across the nodes as replicas. When the Docker Compose configuration changes, you update the Docker stack, and the cluster is updated sequentially: one replica is removed and a new one is created in its place according to the new config, then the next one, and so on. If an error occurs, the cluster is rolled back to the previous configuration. Well, let's get started:

      docker stack deploy -c docker-compose.yml test_stack
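
      A minimal sketch of what docker-compose.yml for such a stack might look like (the web service name, image and update settings here are only illustrative assumptions):

      version: "3.7"
      services:
        web:
          image: nginx
          ports:
            - "80:80"
          deploy:
            replicas: 2
            update_config:
              parallelism: 1     # replace replicas one at a time
            restart_policy:
              condition: on-failure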

      docker service update --label-add foo=bar Nginx
      docker service update --label-rm foo Nginx
      docker service update --publish-rm 80 Nginx
      docker node update --availability drain swarm-node-3

      Docker swarm

      $ sudo docker pull swarm

      $ sudo docker run --rm swarm create

      docker secret create test_secret …
      docker service create --secret test_secret …
      cat /run/secrets/test_secret

      Health check: hello-check-cobbalt. Example pipeline: TravisCI -> Jenkins -> config -> https://www.youtube.com/watch?v=UgUuF_qZmWc https://www.youtube.com/watch?v=6uVgR9WPjYM
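
      A fuller sketch of the same secrets workflow, assuming the secret value is passed via stdin and using an illustrative service name and image:

      echo "my database password" | docker secret create db_pass -
      docker service create --name secret-demo --secret db_pass alpine sleep 3000
      # inside a container of this service the secret is available as a file:
      # cat /run/secrets/db_pass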

      Docker company-wide

      Let's take a company-wide view: we have containers and we have servers. It does not matter whether these are two virtual machines and a few containers or hundreds of servers and thousands of containers; the problem is distributing the containers over the servers, and that takes a system administrator and time. If there is little time and many containers, you need many system administrators, otherwise the containers will be distributed suboptimally, that is, the servers (virtual machines) will be running but not at full capacity and resources will be wasted. In this situation, container orchestration systems are designed to optimize the distribution and to save human resources.

      Consider the evolution:

      * The developer creates the necessary containers by hand.

      * The developer creates the necessary containers using previously prepared scripts.

      * The system administrator, using a configuration management and deployment system such as Chef, Puppet, Ansible or Salt, sets the topology of the system. The topology indicates which container is located in which place.

      * Orchestrators (schedulers) – semi-automatic distribution, maintenance of state and reaction to changes in the system. For example: Google Kubernetes, Apache Mesos, HashiCorp Nomad, Docker Swarm mode, and YARN, which we will cover. New ones also keep appearing: Flocker (https://github.com/ClusterHQ/Flocker/), Helios (https://github.com/spotify/helios/).

      There is the native Docker Swarm solution. Of the mature systems, Kubernetes and Mesos have proved the most popular: the former is a universal, completely ready-to-use system, while the latter is a set of different projects combined into a single package that lets you replace or change its components. There is also a huge number of less popular solutions not promoted by giants such as Google, Twitter and others (for example Nomad), offering scheduling, scaling, upgrades and service discovery, but we will not consider them. Here we will consider the most ready-made solution, Kubernetes, which has gained great popularity for its low entry threshold, good support and sufficient flexibility in most cases, pushing Mesos into the niche of customizable solutions where customization and custom development are economically justified.

      Kubernetes has several ready-made configurations:

      * MiniKube – a single-node cluster on a local machine, designed to lower the entry threshold and for experiments;

      * kubeadm;

      * kops;

      * Kubernetes-Ansible;

      * microKubernetes;

      * OKD;

      * MicroK8s.

      To start a cluster yourself, you can use KubeSai – free Kubernetes.

      The smallest structural unit is called a POD, which corresponds to a YML file in Docker Compose. The process of creating a POD, like that of other entities, is declarative: it is done by writing or changing a configuration YML file and applying it to the cluster. So, let's create a POD:

      # test_pod.yml
      # kubectl create -f test_pod.yml
      apiVersion: v1
      kind: Pod
      metadata:
        name: test
      spec:
        containers:
          - name: test
            image: debian
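
      After applying the manifest, the result can be checked and the pod removed with standard kubectl commands (the pod name test comes from the manifest above):

      kubectl get pods
      kubectl describe pod test
      kubectl delete -f test_pod.yml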

      To run multiple replicas:

      # test_replica_controller.yml
      # kubectl create -f test_replica_controller.yml
      apiVersion: v1
      kind: ReplicationController
      metadata:
        name: nginx
      spec:
        replicas: 3
        selector:
          app: nginx   # label by which the controller finds its running containers
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
              - name: test
                image: debian
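
      The number of replicas can later be changed without editing the file, for example with a standard kubectl command (nginx is the controller name from the manifest above):

      kubectl scale replicationcontroller nginx --replicas=5
      kubectl get rc nginx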

      For balancing, a type of service (a logical entity) called LoadBalancer is used, in addition to which there are also ClusterIP and NodePort:

      apiVersion: v1
      kind: Service
      metadata:
        name: test-service
      spec:
        type: LoadBalancer
        ports:
          - port: 80
            targetPort: 80
            protocol: TCP
            name: http
        selector:
          app: WEB
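
      After applying this manifest, the assigned addresses can be checked as follows (an external IP for a LoadBalancer appears only when the cluster runs in an environment that provides one):

      kubectl get service test-service
      kubectl describe service test-service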

      Overlay network plugins (created and configured automatically): Contiv, Flannel, GCE networking, Linux bridging, Calico, Kube-DNS, SkyDNS.

      # configmap.yml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: config-name
      data:
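
      As a sketch of how such a ConfigMap could be filled in and consumed by a pod as environment variables (the key, value and pod name below are purely illustrative assumptions):

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-config
      data:
        APP_MODE: production
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: config-demo
      spec:
        containers:
          - name: demo
            image: debian
            envFrom:
              - configMapRef:
                  name: app-config   # all keys of the ConfigMap become environment variables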

      Similar to secrets in Docker-swarm, there is a

