VM or Docker?

How do you know whether you need Docker or a VM? You need to determine what exactly you want to isolate. If you want to isolate an entire system with guaranteed resources and virtual hardware, the choice should fall on a VM. If you need to isolate running applications as separate system processes, you need Docker.

So what is the difference between Docker containers and VMs?

A virtual machine (VM) is a virtual computer with a full set of virtual devices and a virtual hard disk, onto which an independent guest OS is installed along with virtual device drivers, memory management, and other components. In other words, we get an abstraction of the physical hardware that allows many virtual computers to run on a single physical machine.
An installed VM can occupy disk space in one of two ways:

  • a fixed-size disk, which gives faster access to the virtual hard disk and avoids file fragmentation;
  • a dynamically expanding disk, which grows as additional applications and data are added, until it reaches the maximum size allocated to it.

The more virtual machines there are on a server, the more space they take up, and each one also requires constant maintenance of the environment your application needs to run.

Docker is software for building and running applications based on containers. Containers and virtual machines offer similar benefits but work differently. Containers take up less space because they share more of the host system's resources than a VM does: unlike a VM, Docker provides virtualization at the OS level rather than the hardware level. This approach yields a smaller memory footprint, faster deployment, and easier scaling.

A container provides a more efficient mechanism for encapsulating applications by exposing the necessary interfaces to the host system. Containers share the host's kernel: each container runs as a separate process of the host OS with its own set of memory areas (its own virtual address space). Since each container has its own virtual address space, data belonging to different containers' memory areas cannot be modified across container boundaries.
The native OS for Docker is Linux (Docker can also be used on Windows and macOS); Docker relies on Linux kernel features to share the kernel between containers. Running Docker containers on Windows therefore happens inside a Linux virtual machine, because containers share the host OS kernel, and for them that kernel must be Linux.

Container - how does it work?

A container is an abstraction at the application level that packages code together with its dependencies. Containers are always created from images by adding a writable top layer and initializing various parameters. Because each container has its own writable layer, and all changes are stored in that layer, multiple containers can share access to the same underlying image.

Each container can be configured through the docker-compose.yml file of a docker-compose project included in the main solution. There you can set parameters such as the container name, ports, identifiers, resource limits, and dependencies on other containers. If you do not specify a container name in the settings, Docker will create a new container each time and assign it a random name.
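As an illustration, a minimal docker-compose.yml covering these parameters might look like the following (the service names, images, and port values here are hypothetical, not taken from any particular project):

```yaml
version: "3.8"
services:
  web:
    container_name: my-web     # without this, Docker generates a random name
    image: nginx:alpine        # image the container is created from
    ports:
      - "8080:80"              # host:container port mapping
    depends_on:
      - db                     # dependency on another container
    mem_limit: 256m            # resource limit for the container
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

Running `docker compose up` in the directory with this file would start both containers, with `web` waiting for `db` to be started first.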

When a container is started from an image, Docker mounts a read/write filesystem on top of the image's layers. This is where all the processes we want our Docker container to run will run.

When Docker first starts a container, the initial read/write layer is empty. Changes are applied to that layer: for example, if you want to modify a file, that file is first copied from the read-only layer below into the read/write layer.
The read-only version of the file still exists, but is now hidden under the copy. To store data independently of the container's life cycle, volumes are used. Volumes are initialized when a container is created.

How is the image associated with the container?

The image is the basic building block of every container. An image is created from a Dockerfile added to the project and is a set of read-only file systems (layers) stacked on top of each other and grouped together; the maximum number of layers is 127.

At the heart of every image is a base image, specified by the FROM instruction - the entry point of the Dockerfile. Each layer is read-only and corresponds to a single instruction in the Dockerfile that modifies the file system.
To combine these layers into a single image, Docker uses the Advanced multi-layered Union file system (AuFS, built on top of UnionFS), which allows files and directories from different layers to transparently overlap, forming a single unified file system.

Layers contain metadata that stores related information about each layer at build time and at runtime. Each layer holds a reference to its parent layer; a layer with no such reference is the base (bottom) layer of the image.

A Dockerfile may contain instructions such as:

  • FROM - the base image; the entry point for building the image;
  • MAINTAINER - the name of the image maintainer (deprecated in favor of LABEL);
  • RUN - executes a command during image build;
  • ADD - copies a file from the host into the image; if a URL is specified, Docker downloads the file to the specified directory;
  • ENV - sets environment variables;
  • CMD - the default command executed when a container is created from the image;
  • ENTRYPOINT - the command executed when the container is started;
  • WORKDIR - the working directory for the CMD command;
  • USER - sets the UID for the container created from the image;
  • VOLUME - declares a mount point for a volume in the container;
  • EXPOSE - the set of ports the container listens on.
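A minimal Dockerfile combining several of these instructions might look like this (the base image, file names, UID, and port are illustrative, not prescribed by the article):

```dockerfile
FROM python:3.11-slim                 # base image: the build's entry point
ENV APP_ENV=production                # environment variable inside the image
WORKDIR /app                          # working directory for later instructions
ADD requirements.txt /app/            # copy a host file into the image
RUN pip install -r requirements.txt   # runs at build time, adds a layer
USER 1000                             # UID the container process runs as
VOLUME /app/data                      # mount point for persistent data
EXPOSE 8000                           # port the containerized app listens on
CMD ["python", "app.py"]              # default command when the container starts
```

Each FROM, RUN, and ADD line above produces its own read-only layer, which is what makes rebuilds and image sharing cheap.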

How does UnionFS work?

UnionFS is a stackable union file system for Linux and FreeBSD. It implements the copy-on-write (COW) mechanism. The working unit of UnionFS is a layer; each layer should be considered a separate, full-fledged file system with its own directory hierarchy starting from the root. UnionFS creates a union mount over other file systems and allows files and directories from different file systems (called branches) to be merged, transparently to the user, into a single linked file system.

The contents of directories with the same path are displayed together in one merged directory (a single namespace) of the resulting file system.

UnionFS combines layers based on the following principles:

  • one of the layers becomes the top-level layer; the second and subsequent layers become lower-level layers;
  • layer objects are looked up "from top to bottom": if the requested object exists in the "upper" layer, it is returned, regardless of whether an object with the same name exists in a "lower" layer; otherwise the "lower" layer's object is returned; if the requested object exists in neither, the error "No such file or directory" is returned;
  • the working layer is the "top" one: all user changes to data are reflected only in the top-level layer, without affecting the contents of lower-level layers.
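The lookup and copy-on-write rules above can be sketched in a few lines of Python. This is a toy model, not real UnionFS: layers are plain dicts mapping paths to contents, with a single writable top layer.

```python
class UnionFS:
    """Toy model of a union mount: read-only lower layers plus a writable top."""

    def __init__(self, *lower_layers):
        self.lower = list(lower_layers)  # read-only branches, bottom first
        self.top = {}                    # writable top-level layer

    def read(self, path):
        # Lookup goes "from top to bottom": the first match wins.
        if path in self.top:
            return self.top[path]
        for layer in reversed(self.lower):
            if path in layer:
                return layer[path]
        raise FileNotFoundError("No such file or directory: " + path)

    def write(self, path, data):
        # Copy-on-write: changes land only in the top layer; the read-only
        # version below still exists but is now hidden under the copy.
        self.top[path] = data


base = {"/etc/os-release": "debian", "/bin/sh": "shell"}
fs = UnionFS(base)
fs.write("/etc/os-release", "custom")  # change goes into the writable layer
print(fs.read("/etc/os-release"))      # "custom": top layer hides the base
print(base["/etc/os-release"])         # "debian": lower layer is untouched
```

The same top-down resolution is why deleting or editing a file in a container never modifies the shared image layers underneath.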

Docker is the most widespread technology for running applications in containers. It has become the de facto standard in this area, building on the cgroups and namespaces provided by the Linux kernel.

Docker allows us to deploy applications quickly and make efficient use of the file system by sharing the OS kernel among all containers, each of which runs as a separate OS process.
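Because containers are ordinary processes isolated with kernel namespaces, on a Linux host you can see a process's namespace membership directly in /proc. A quick check, assuming a Linux system:

```shell
# Every process exposes the namespaces it belongs to under /proc/<pid>/ns.
ls /proc/self/ns
# A containerized process gets its own entries here (pid, net, mnt, ...),
# while ordinary host processes share the host's namespaces.
```

Two processes are in the same namespace exactly when the corresponding links under their /proc/<pid>/ns directories point to the same object.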

Source: habr.com
