Release of LXC 5.0 container management system

Canonical has released LXC 5.0, a toolkit for managing isolated containers. It provides a runtime suitable both for containers with a full system environment close to virtual machines and for unprivileged single-application (OCI) containers. LXC is a low-level toolkit that operates at the level of individual containers; for centralized management of containers deployed across a cluster of servers, the LXD system is developed on top of LXC. The LXC 5.0 branch is classified as a long-term support release, with updates published for five years. The LXC code is written in C and distributed under the LGPLv2.1 license.

LXC includes the liblxc library, a set of utilities (lxc-create, lxc-start, lxc-stop, lxc-ls, etc.), templates for building containers, and bindings for various programming languages. Isolation is implemented with standard Linux kernel mechanisms: namespaces isolate processes, the network stack, IPC, UTS (hostname), user IDs, and mount points; cgroups limit resources; and kernel features such as AppArmor and SELinux profiles, seccomp policies, chroot (pivot_root), and capabilities are used to drop privileges and restrict access.
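As an illustration of the liblxc library mentioned above, the C API can drive the same lifecycle operations as the command-line utilities. The following is a minimal sketch, not a definitive program: the container name and the template arguments are illustrative, and it assumes a Linux host with LXC installed, built with `-llxc` and run with sufficient privileges.

```c
#include <stdio.h>
#include <lxc/lxccontainer.h>

int main(void)
{
        /* Allocate a container handle named "demo" using the default config path */
        struct lxc_container *c = lxc_container_new("demo", NULL);
        if (!c) {
                fprintf(stderr, "failed to create container handle\n");
                return 1;
        }

        /* Build the container rootfs from the "download" template;
         * the distribution/release/arch values here are illustrative */
        if (!c->createl(c, "download", NULL, NULL, LXC_CREATE_QUIET,
                        "-d", "ubuntu", "-r", "jammy", "-a", "amd64", NULL)) {
                fprintf(stderr, "failed to create container\n");
                lxc_container_put(c);
                return 1;
        }

        /* Boot the container's init, then shut it down again */
        if (!c->start(c, 0, NULL))
                fprintf(stderr, "failed to start container\n");
        else
                c->stop(c);

        /* Drop our reference to the handle */
        lxc_container_put(c);
        return 0;
}
```

The same flow corresponds to running lxc-create, lxc-start, and lxc-stop from the shell.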

Major changes:

  • The build system has been switched from autotools to Meson, which is also used to build projects such as X.Org Server, Mesa, Lighttpd, systemd, GStreamer, Wayland, GNOME, and GTK.
  • Added new cgroup configuration options (lxc.cgroup.dir.container, lxc.cgroup.dir.monitor, lxc.cgroup.dir.monitor.pivot, and lxc.cgroup.dir.container.inner) that allow the cgroup paths for the container, the monitor process, and nested cgroup hierarchies to be defined explicitly.
  • Added support for time namespaces, which bind a separate system-clock state to the container, allowing it to use its own time, different from the host's. The options lxc.time.offset.boot and lxc.time.offset.monotonic define the container's clock offsets relative to the host's boottime and monotonic clocks.
  • VLAN support has been implemented for virtual Ethernet adapters (veth). VLANs are managed with the options veth.vlan.id, which sets the main (untagged) VLAN, and veth.vlan.tagged.id, which binds additional tagged VLANs.
  • For virtual Ethernet adapters, the sizes of the receive and transmit queues can now be configured using the new options veth.n_rxqueues and veth.n_txqueues.
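Taken together, the new options above might appear in a container configuration roughly as follows. This is a hedged sketch: the cgroup paths, clock offsets, VLAN IDs, and queue counts are illustrative values, and the network keys are shown under the lxc.net.0 prefix used for per-interface settings.

```
# Explicit cgroup placement for the container payload and the monitor process
lxc.cgroup.dir.container = lxc.payload
lxc.cgroup.dir.monitor = lxc.monitor
lxc.cgroup.dir.container.inner = init

# Time namespace: shift the container's boottime and monotonic clocks
# relative to the host (offset values are illustrative)
lxc.time.offset.boot = 600s
lxc.time.offset.monotonic = 300s

# veth networking: VLAN assignment and queue sizes (IDs/counts illustrative)
lxc.net.0.type = veth
lxc.net.0.veth.vlan.id = 100
lxc.net.0.veth.vlan.tagged.id = 200
lxc.net.0.veth.n_rxqueues = 4
lxc.net.0.veth.n_txqueues = 4
```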

Source: opennet.ru
