Release of the LXD 5.0 container management system

Canonical has released the LXD 5.0 container manager and the LXCFS 5.0 virtual file system. The LXD code is written in Go and distributed under the Apache 2.0 license. The 5.0 branch is classified as a long-term support release: updates will be published until June 2027.

The LXC toolkit is used as the runtime for launching containers; it includes the liblxc library, a set of utilities (lxc-create, lxc-start, lxc-stop, lxc-ls, etc.), templates for building containers, and bindings for various programming languages. Isolation is carried out using standard Linux kernel mechanisms. Namespaces are used to isolate processes, IPC, UTS, the network stack, user IDs, and mount points; cgroups are used to limit resources. Kernel features such as AppArmor and SELinux profiles, seccomp policies, chroots (pivot_root), and capabilities are used to drop privileges and restrict access.
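These kernel primitives are easy to demonstrate outside of LXC. The following minimal Go sketch (an illustration only, not how liblxc itself works; liblxc is written in C) spawns a shell in fresh UTS, PID, IPC, mount, and user namespaces; it assumes a Linux system with unprivileged user namespaces enabled:

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        // Ask the kernel for new namespaces for the child process.
        // CLONE_NEWUSER lets this run unprivileged by mapping the
        // current user to root inside the new user namespace.
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID |
                syscall.CLONE_NEWNS | syscall.CLONE_NEWIPC | syscall.CLONE_NEWUSER,
            UidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getuid(), Size: 1}},
            GidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getgid(), Size: 1}},
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Inside the spawned shell, changing the hostname or creating System V IPC objects no longer affects the host.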

In addition to LXC, LXD also uses components from the CRIU and QEMU projects. Whereas LXC is a low-level toolkit for manipulating individual containers, LXD provides tools for centrally managing containers deployed across a cluster of several servers. LXD is implemented as a background process that accepts requests over the network via a REST API and supports various storage backends (directory tree, ZFS, Btrfs, LVM), state snapshots, live migration of running containers from one machine to another, and tools for storing container images. LXCFS is used to simulate the /proc and /sys pseudo-filesystems inside containers, along with a virtualized view of cgroupfs, so that containers look like regular standalone systems.
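Because the API is plain REST served over a local Unix socket, it can be queried without any client library. A minimal Go sketch follows; the socket path shown is the usual default for a non-snap installation (snap-based installs use /var/snap/lxd/common/lxd/unix.socket):

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Route all requests to LXD's Unix socket instead of TCP.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/var/lib/lxd/unix.socket")
                },
            },
        }
        // The host part of the URL is ignored; only the path matters.
        resp, err := client.Get("http://lxd/1.0/instances")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // JSON list of instance URLs
    }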

Key improvements:

  • Ability to hot-plug and unplug disks and USB devices. In a virtual machine, a new disk shows up as a new device on the SCSI bus, while a USB device is announced via a USB hotplug event (see the first sketch after this list).
  • LXD can now start even when a network connection cannot be brought up, for example because a required network device is missing. Instead of failing with an error at startup, LXD launches as many instances as it can under the current conditions and starts the remaining instances once network connectivity is established.
  • A new cluster member role, ovn-chassis, has been added for clusters that use OVN (Open Virtual Network) for networking; by assigning the ovn-chassis role, specific servers can be designated to act as OVN routers.
  • An optimized mode for refreshing the contents of storage volumes is provided. In previous releases, a refresh first copied a container instance or volume, for example using the send/receive functionality of ZFS or Btrfs, after which the created copy was synchronized by running rsync. To make refreshing virtual machines more efficient, the new release uses improved migration logic: when the source and target servers use the same storage pool, snapshots and send/receive operations are used automatically instead of rsync.
  • The instance identification logic in cloud-init has been reworked: a UUID is now used as the instance-id instead of the instance name.
  • Added support for intercepting the sched_setscheduler system call so that unprivileged containers can change process scheduling policies and priorities (see the second sketch after this list).
  • Implemented the lvm.thinpool_metadata_size option to control the metadata size of LVM thin pools (see the third sketch after this list).
  • Redesigned the network information file format for lxc; added support for interface bonding, network bridges, VLANs, and OVN.
  • Raised the minimum required versions of components: Linux kernel 5.4, Go 1.18, LXC 4.0.x, and QEMU 6.0.
  • LXCFS 5.0 adds support for the unified cgroup hierarchy (cgroup2), implements /proc/slabinfo and /sys/devices/system/cpu, and switches to the Meson build system.
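The first sketch below, referenced from the hot-plug item, attaches a disk to a running instance through the official Go client (github.com/lxc/lxd/client); the instance name "vm1", the device name "extra", and the source path are illustrative:

    package main

    import (
        lxd "github.com/lxc/lxd/client"
    )

    func main() {
        // "" selects the default Unix socket path.
        c, err := lxd.ConnectLXDUnix("", nil)
        if err != nil {
            panic(err)
        }
        inst, etag, err := c.GetInstance("vm1")
        if err != nil {
            panic(err)
        }
        // Adding a disk device to a running VM surfaces inside the
        // guest as a new device on the SCSI bus.
        inst.Devices["extra"] = map[string]string{
            "type":   "disk",
            "source": "/srv/data.img",
        }
        op, err := c.UpdateInstance("vm1", inst.Writable(), etag)
        if err != nil {
            panic(err)
        }
        if err := op.Wait(); err != nil {
            panic(err)
        }
    }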
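The second sketch shows what the intercepted system call makes possible: a process inside an unprivileged container requesting the SCHED_FIFO real-time policy via a raw sched_setscheduler(2) call. Where the interception is unavailable or not permitted, the call simply fails with EPERM:

    package main

    import (
        "fmt"
        "syscall"
        "unsafe"
    )

    // schedParam mirrors the C struct sched_param (a single int field).
    type schedParam struct {
        Priority int32
    }

    const schedFIFO = 1 // SCHED_FIFO constant from <sched.h>

    func main() {
        param := schedParam{Priority: 10}
        // pid 0 means "the calling process".
        _, _, errno := syscall.Syscall(syscall.SYS_SCHED_SETSCHEDULER,
            0, schedFIFO, uintptr(unsafe.Pointer(&param)))
        if errno != 0 {
            fmt.Println("sched_setscheduler failed:", errno)
            return
        }
        fmt.Println("now running under SCHED_FIFO")
    }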
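The third sketch creates an LVM storage pool with the new metadata size option, again via the Go client; the pool name and the size value are illustrative, and the exact request structure is an assumption based on the client's API types:

    package main

    import (
        lxd "github.com/lxc/lxd/client"
        "github.com/lxc/lxd/shared/api"
    )

    func main() {
        c, err := lxd.ConnectLXDUnix("", nil)
        if err != nil {
            panic(err)
        }
        err = c.CreateStoragePool(api.StoragePoolsPost{
            Name:   "pool1",
            Driver: "lvm",
            StoragePoolPut: api.StoragePoolPut{
                Config: map[string]string{
                    // New in LXD 5.0: explicit thin pool metadata size.
                    "lvm.thinpool_metadata_size": "1GiB",
                },
            },
        })
        if err != nil {
            panic(err)
        }
    }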

Source: opennet.ru
