The following VMware white paper explains how this works: www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/techpaper/vmw-white-paper-secrty-vsphr-hyprvsr-uslet-101.pdf
This type of virtualization is not to be confused with paravirtualization, utilized by software such as Xen (xenproject.org), where guest operating systems (OSs) can share hardware on a modified host OS.
NOTE
Xen is able to support hardware virtualization and paravirtualization. You can find more information on the subject here:
wiki.xen.org/wiki/Xen_Project_Software_Overview#PV_.28x86.29
In Figure 1.1 we can see the difference between containers and virtual machines. The processes shown are those relevant to a running application. Using our web server example again, one process might be a web server listening on an HTTP port and another a web server listening on an HTTPS port. As mentioned, to maintain the desired modularity, a container should service a single, specific task (such as a web server). Normally, it will run the main application's process alone, along with any required associated processes.
Figure 1.1: How virtual machines and containers reside on a host
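As a quick demonstration of that single-process model (a minimal sketch, assuming Docker and the alpine image are available locally), list the process table inside a fresh container:

$ docker run --rm alpine ps aux
PID   USER     TIME  COMMAND
    1 root      0:00 ps aux

The only process visible is ps itself, running as PID 1 inside the container's own process namespace; there is no init system, kernel, or other OS machinery of the kind a virtual machine would carry.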
It should be clear that a Linux container is an entirely different animal than a VM. A saying that appears to have gained popularity at Red Hat during the explosion of container popularity noted earlier is that fundamentally “containers are Linux.” One interpretation of such a statement is that if you can appreciate how a Linux system is constructed at a nuts-and-bolts level and understand how to slice up a system into small segments, each of which uses native Linux components, then you will have a reasonable chance of understanding what containers are. For a more specific understanding of where that phrase comes from, visit this Red Hat blog page that explains the motivation behind the phrase: www.redhat.com/en/blog/containers-are-linux.
From the perspective of an underlying host machine, the operating system is not only slicing up memory to share among containers, segmenting the networking stack, dividing up the filesystem, and restricting full access to the CPU; it is also hiding some of the processes that are running in the process table. How are all those aspects of a Linux system controlled centrally? Correct, via the kernel. During the massive proliferation of Docker containers, it became obvious that many users did not fully appreciate how all of these components hung together.
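You can watch the kernel perform one of these tricks without any container runtime involved. This is a minimal sketch, assuming the unshare utility from util-linux is installed and you have root access:

# Fork into new PID and mount namespaces, remounting /proc so that
# ps can only see processes inside the new namespace
$ sudo unshare --fork --pid --mount-proc ps aux

The only process listed is ps itself, reported as PID 1; the many other processes running on the host are simply invisible from inside the new namespace. This is exactly the kind of kernel-level slicing a container runtime performs on your behalf.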
For example, the Docker runtime has been improved over time with new security features (which we look at in more detail in Chapter 2, “Rootless Runtimes”); but in older versions, it needed to run as the root user without exception. Why? Because slicing the system up into suitable container-shaped chunks required superuser permissions to convince the kernel to let an application like Docker do so.
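The legacy of that design is still visible on a default installation today. As a rough sketch (ownership details and timestamps will vary between systems), you can check which user owns the daemon process and its control socket:

# Confirm that the Docker daemon runs as the root user
$ ps -o user= -p "$(pgrep -x dockerd)"
root

# Inspect the ownership and permissions of the daemon's control socket
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan  1 00:00 /var/run/docker.sock

Note the docker group on the socket: any user in that group can issue commands to a root-owned daemon, which in practice amounts to root access on the host.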
One example scenario (which is still common to this day) that might convey why running as the root user is such a problem involves the popular continuous integration/continuous deployment (CI/CD) automation tool, Jenkins.
TIP
Security in the CI/CD software development pipeline is the subject of the chapters in Part II of this book, “DevSecOps Tooling.”
Imagine that a Jenkins job is configured to run from a server somewhere that makes use of Docker Engine to run a new container, having built the container image from the Dockerfile passed to it. Think for a second: even the seemingly simplest of tasks, such as running a container, always used to need root permissions to split up a system's resources, from networking to filesystem access, from kernel namespaces to kernel control groups, and beyond. This meant you needed blind faith in the old (now infamous) password manager in Jenkins to look after the password that ran the Jenkins job, because as that job executed on the host, it did so with root user permissions.
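To make that risk concrete, here is an illustrative sketch of what any process that can talk to Docker Engine, such as a compromised Jenkins job, could do (it assumes the alpine image is available; run it only on a disposable machine):

# Bind-mount the host's entire root filesystem into a container
# and read a file that only the root user should ever see
$ docker run --rm -v /:/host alpine cat /host/etc/shadow

Because the daemon performs the bind mount with root permissions, the container happily prints the host's password hashes. In other words, the ability to start containers has historically been equivalent to holding root on the host.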
What better way to examine how a system views a container—which, it is worth repeating, is definitely not a virtual machine—than by using some hands-on examples?
Container Components
There are typically a number of common components on a Linux system that enable the secure use of containers, although new kernel and system features, and improvements to existing ones, are added periodically. These are Linux security features that allow containers to be bundled into a distinct unit, separated from other system resources. Thanks to these system and kernel features, most containers spawned without any nonstandard options to disable such security features have a limited impact on other containers and on the underlying host. However, containers are often unwittingly run as the root user, or developers relax security features to ease their development process. Table 1.1 presents the key components.
Table 1.1: Common Container Components
COMPONENT | DESCRIPTION
---|---
Kernel namespaces | A logical partitioning of kernel resources to reduce the visibility that processes receive on a system.
Control groups | Functionality to limit the usage of system resources such as I/O, CPU, RAM, and networking. Commonly called cgroups.
SELinux/AppArmor | Mandatory Access Control (MAC) for enforcing security-based access control policies across numerous system facets such as filesystems, processes, and networking. Typically, SELinux is found on Red Hat Enterprise Linux (RHEL) derivatives and AppArmor on Debian derivatives. However, SELinux is popular on both, and AppArmor appears to be in an experimental phase for RHEL derivatives such as CentOS.
Seccomp | Secure Computing (seccomp) allows the kernel to restrict numerous system calls; for Docker's perspective, see docs.docker.com/engine/security/seccomp.
Chroot | An isolation technique that uses a pseudo root directory so that processes running within the chroot lose visibility of other defined facets of a system.
Kernel capabilities | Checks and restricts privileged system calls; more in the next section.
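To see one of these components at work, note that the resource limits you pass to Docker are enforced by control groups in the kernel. The following is a minimal sketch assuming Docker, the alpine image, and a cgroup v1 host; on cgroup v2 hosts the equivalent file is /sys/fs/cgroup/memory.max:

# Request a 64MB memory ceiling and read back the limit the kernel recorded
$ docker run --rm --memory=64m alpine cat /sys/fs/cgroup/memory/memory.limit_in_bytes
67108864

It is the kernel itself, not the Docker runtime, that enforces the 67,108,864-byte ceiling, which is why a process inside the container cannot simply opt out of it.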
Kernel Capabilities
To inspect the innards of a Linux system and how they relate to containers in practice, we need to look a little more closely at kernel capabilities. They matter because, even before other security hardening techniques were introduced in later versions, Docker offered (and still offers) the ability to disable certain security features and to open up specific, otherwise locked-down, kernel permissions.
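As a quick illustration (a sketch assuming Docker and the alpine image; BusyBox's ping relies on the CAP_NET_RAW capability to open a raw socket):

# With every capability dropped, even a loopback ping is refused
$ docker run --rm --cap-drop=ALL alpine ping -c1 127.0.0.1

# Adding the single required capability back restores the functionality
$ docker run --rm --cap-drop=ALL --cap-add=NET_RAW alpine ping -c1 127.0.0.1

The first command fails with a permission error, while the second succeeds, which demonstrates just how granular these kernel permissions are.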
You can find out about Linux kernel capabilities by using the command $ man capabilities (or by visiting man7.org/linux/man-pages/man7/capabilities.7.html).
The manual explains that capabilities offer a Linux system the ability to run permission checks against each system call (commonly called a syscall) sent to the kernel.
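You can inspect those checks directly. The following sketch assumes Docker and the capsh utility from the libcap package; the exact bitmask will vary with your Docker version, and the decoded list is trimmed here:

# Effective capability bitmask of PID 1 inside a default container
$ docker run --rm alpine grep CapEff /proc/1/status
CapEff: 00000000a80425fb

# Decode the bitmask into human-readable capability names on the host
$ capsh --decode=00000000a80425fb
0x00000000a80425fb=cap_chown,cap_dac_override,cap_fowner,...

Comparing that restricted set against the full list in the capabilities manual page shows how much of the kernel's privileged surface a default container never touches.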