in the occupied disk space with the number of containers, and an exponential increase with the number of product versions, since each instance takes up a lot of space. Each sandbox contains an emulated machine, an operating system image, all installed programs, and the developer's code, which adds up to quite a lot. The only thing that is shared is the installation of the virtual machine software itself, and even that only within one physical server. For example, with 10 developers the size grows tenfold, and with 3 product versions – thirtyfold.
All these disadvantages for the WEB are meant to be solved by Docker. Hereinafter, we will talk about Docker for Linux, and will not consider the slightly different kernel implementation for FreeBSD, or the implementation for Windows 10 Professional, under which either a stripped-down Linux kernel or an independent Windows containerization implementation is used. The main idea is not to create an extra layer (hardware virtualization, its own OS and a hypervisor), but to delimit access rights. You cannot put MS Windows into a container, but you can put both RedHat and Debian, since the kernel is shared and the differences are only in files, by creating a sandbox (separate directories with no way to escape beyond their boundaries) out of those files. We are also talking about WEB solutions, since for native solutions problems may arise when a program needs exclusive access from the container (the Docker sandbox) to the OS kernel, for example, for native rendering of windows. You can also limit the amount of memory, processor time, and the number of processes.
Lightweight Virtualization or Lightweight Isolation – A Look at Docker Implementation
Let's take a look at the history of the prerequisites for the emergence of Docker – namely the prerequisites, since Docker itself does not implement isolation, let alone virtualization, but organizes work with the isolation provided by the kernel. Unlike virtualization, which resembles a hangar with its own world and its own foundation on which you can put whatever your heart desires – say, a lawn – isolation can be compared to a fence. Isolation appeared in the Linux kernel gradually, in the parts responsible for different levels, and in parallel programs appeared to provide an interface and a concept for applying this isolation in real projects. Isolation consists of six types of resource limitation.
The first to appear in the kernel was file system isolation, which allows creating a sandbox with the chroot command, back in 1979: from the outside the sandbox is fully visible, but once you enter it, the folder over which the command was executed becomes the root, and you cannot get back out. The next was the delimitation of processes: the sandbox, like the host system, exists as long as its process with pid (number) 1 exists; for the sandbox it is its own pid 1, while from outside the sandbox it is an ordinary process. Then came the cgroups delimitations: user groups, memory, and others. All of this exists in the kernel of any Linux, regardless of whether you have Docker installed or not. Throughout history there were attempts, both OpenSource and commercial, to create containers by developing this functionality themselves, and similar solutions found their users, but they never reached the masses. In its early days Docker used LXC, a fairly stable but hard-to-use containerization solution, and gradually replaced it with native cgroups. Docker also supports layering of its images (more on that later), but does not implement it itself, relying on UnionFS (UFS).
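As a rough sketch of those kernel primitives – not how Docker invokes them, just the same mechanisms done by hand, assuming a Debian-like host with debootstrap and util-linux available and a cgroup v1 layout (all paths and names below are only illustrative):

  # Filesystem isolation: chroot makes ./sandbox the new root for the spawned shell.
  mkdir -p sandbox
  sudo debootstrap stable ./sandbox          # populate a minimal Debian tree
  sudo chroot ./sandbox /bin/bash

  # Process isolation: a separate pid namespace, so the shell sees itself as pid 1.
  sudo unshare --pid --fork --mount-proc /bin/bash

  # Resource limits via cgroups (cgroup v1 memory controller shown).
  sudo mkdir /sys/fs/cgroup/memory/demo
  echo $((256*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
  echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs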
Docker and disk space
Since Docker does not implement the functionality itself, but uses what is built into the Linux kernel, and has no graphical interface under the hood, it takes up very little space on its own.
Since the container uses the host OS kernel, the base image (usually an OS image) contains only the complementary packages. So the Debian Docker image is 125MB, while the ISO image is 290MB. To check that a single kernel is used in the container, display information about it with uname -a or cat /proc/version , and information about the container environment itself with cat /etc/issue
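For example (a minimal check; the debian image here is just an illustration), you can compare what the host and a container report:

  uname -a                                # kernel on the host
  docker run --rm debian uname -a         # the container reports the same kernel
  cat /etc/issue                          # distribution on the host
  docker run --rm debian cat /etc/issue   # inside the container only the files differ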
Docker builds an image based on the instructions in a Dockerfile, which can be located remotely or locally, and the image can be recreated from it at any time. Therefore, if you are not using an image at the moment, you can delete it. An exception is an image created from a container with the docker commit command, but creating images that way is not really correct, and you can always reconstruct the Dockerfile of an image with the docker history command and then delete the image. The advantage of storing images is that you do not have to wait while one is being created: downloading the OS and the libraries.
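A minimal sketch of this cycle (the Dockerfile contents and the myapp:v1 tag are hypothetical): build an image from a Dockerfile, inspect its layers, and delete it knowing it can be rebuilt at any time.

  cat > Dockerfile <<'EOF'
  FROM debian:stable
  RUN apt-get update && apt-get install -y --no-install-recommends nginx
  COPY . /app
  EOF
  docker build -t myapp:v1 .     # build the image from the Dockerfile
  docker history myapp:v1        # the layers (and commands) the image was built from
  docker rmi myapp:v1            # safe to delete: it can be rebuilt from the Dockerfile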
Docker itself operates on an image (Image), which is built according to the instructions in the Dockerfile. When you create several containers based on it, the occupied space practically does not grow, since a container is just a process plus configuration settings. When files are changed in a container, the files themselves are not saved; only the changes are, and they are discarded once the container is removed. This guarantees an identical environment in 99% of cases, and as a consequence it makes sense to place the preparatory operations common to all containers – the installation of specific programs – into the image, a side effect of which is the absence of their duplication. To be able to save data, folders and files are mounted to the host (parent) system. Therefore, you can run a hundred or more containers on an ordinary computer and see no change in the free space on the disk. At the same time, if the developers use Git – and how could they do without it – and commit frequently, there may be no need to mount the folders with the source code at all.
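A sketch of keeping data outside the container (the host path, the container names and the MYSQL_ROOT_PASSWORD value are only illustrative):

  # Bind-mount a host directory into the container so the data outlives it:
  docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v /srv/mysql-data:/var/lib/mysql mysql:5.7
  docker rm -f db                # removing the container does not touch the mounted data
  ls /srv/mysql-data

  # A named volume is the Docker-managed alternative to a host path:
  docker volume create appdata
  docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=secret -v appdata:/var/lib/mysql mysql:5.7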
The Docker image is not a monolithic image of your product, but a layer cake of images whose layers are cached. This allows you to save a lot of time on image creation. Caching can be disabled with the --no-cache=true switch of the build command if Docker does not recognize that the data is mutable. Docker detects changes in the ADD instruction, which adds a file from the host system to the container, by the hash of the file. So, if you create two containers, one with NGINX and the other with MySQL, both based on Ubuntu 14.04, there will be three image layers: MySQL, NGINX, and Ubuntu. The image layers can be viewed with the docker history command. The same works for your projects: when copying two versions of the code into your image with the ADD command, you will have three layers and two images: the base one, one with the code of the first version, and one with the code of the second version, regardless of the number of containers. The number of layers is limited to 127. It is important to note that when cloning a project you need to specify the version, not just git clone , but git clone --branch v1 and git clone --branch v2 , otherwise Docker will cache the layer created by the git clone command and, when building the second image, we will get the same one.
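To illustrate (the image tags and repository URL are hypothetical): rebuilding without the cache, inspecting shared layers, and pinning the cloned version so the cached clone layer is not reused:

  docker build --no-cache=true -t myapp:v2 .   # rebuild, ignoring the layer cache
  docker history myapp:v1                      # both tags share the base layers
  docker history myapp:v2

  # Pin the version when cloning, otherwise the cached "git clone" layer is reused:
  git clone --branch v1 https://example.com/project.git v1
  git clone --branch v2 https://example.com/project.git v2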
Docker does not reserve resources in advance, but only limits them if that is specified in the settings when the container is created (for memory the key is -m, for the processor – -c). Since Docker supports different containerization filesystems, there is no unified interface for tuning them. But in any case, a resource is consumed only as much as is required, not as much as is allocated, as in virtual machines.
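For example (the limit values and the nginx image are only illustrative), the limits are set at container creation and can be checked afterwards:

  docker run -d --name limited -m 512m -c 512 nginx   # cap memory at 512MB, give 512 CPU shares
  docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.CpuShares}}' limited
  docker stats --no-stream limited                    # actual consumption, not the allocation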
Such economy of disk space and the weightlessness of the containers themselves encourage an irresponsible attitude toward downloading images and creating containers.
Container garbage collection
Because a container provides far more opportunities than a virtual machine, the situation is complicated by the garbage left behind while Docker runs. The problem is solved simply by running the garbage collector, which appeared in version 1.13, or, for earlier versions, a little less simply by writing the script you need.
Just as it is simple to create a container with docker run name_image , it is just as simple to delete it with docker rm -f id_container . Often, just to experiment, it is convenient to run a container interactively with docker run -ti name_image bash , and we immediately find ourselves inside the container. When we exit it with Ctrl + D , it will be stopped. To remove the container automatically on exit, use the --rm parameter. But because containers are so weightless and so easy to create, they are often abandoned and not removed, which leads to their explosive growth. You can look at the running ones with the docker ps command, and at the stopped ones with docker ps -a . To prevent this, use the docker container prune garbage collector, which was introduced in version 1.13 and removes all stopped containers. For earlier versions, use docker rm $(docker ps -q -f status=exited) . If running it is undesirable for you because you rely on the stopped containers, you are most likely using Docker incorrectly, since spinning a container up from an image is almost as quick and easy as restoring a stopped one to work. If you need to save state in a container, this is done by mounting folders or volumes.
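Putting the cleanup commands together (a sketch; name_image stands for whatever image you use):

  docker ps                                    # running containers
  docker ps -a                                 # including stopped ones
  docker container prune                       # Docker 1.13+: remove all stopped containers
  docker rm $(docker ps -q -f status=exited)   # the equivalent for earlier versions
  docker run -ti --rm name_image bash          # a throwaway container that removes itself on exit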