LXD is a next-generation system container manager, as its official source puts it. It offers a user experience similar to virtual machines, but uses Linux containers instead.
The LXD core is a privileged daemon (a service running as root) that provides a REST API over a local Unix socket, and also over the network if configured to do so. Clients, such as the command-line tool shipped with LXD, send requests through this REST API. This means that whether you are accessing a local host or a remote one, everything works the same way.
In this article we will not dwell on LXD's concepts, nor cover every feature described in the documentation, including the recent support in current LXD versions for QEMU virtual machines alongside containers. Instead, we will learn only the basics of container management: setting up storage pools, networking, running a container, applying resource limits, and using snapshots, so that you can get a basic understanding of LXD and use containers on Linux.
For complete information, please refer to the official source:
This means that two packages will be installed at once: one system package and one snap package. Installing two packages on one system can create a problem: the system package can become orphaned if the snap package is removed by the snap package manager.
To find the lxd package in the snap repository, you can use the following command:
snap find lxd
Name Version Summary
lxd 3.21 System container manager and API
lxd-demo-server 0+git.6d54658 Online software demo sessions using LXD
nova ocata OpenStack Compute Service (nova)
nova-hypervisor ocata OpenStack Compute Service - KVM Hypervisor (nova)
distrobuilder 1.0 Image builder for LXC and LXD
fabrica 0.1 Build snaps by simply pointing a web form to...
satellite 0.1.2 Advanced scalable Open source intelligence platform
By running the list command, you can make sure that the lxd package is not installed yet:
snap list
Name Version Rev Tracking Publisher Notes
core 16-2.43.3 8689 stable canonical✓ core
Even though LXD is distributed as a snap package, you still need to install it through the system package lxd, which creates the corresponding group in the system, the necessary utilities in /usr/bin, and so on.
sudo apt update
sudo apt install lxd
Make sure the package is installed as a snap package:
snap list
Name Version Rev Tracking Publisher Notes
core 16-2.43.3 8689 stable canonical✓ core
lxd 3.21 13474 stable/… canonical✓ -
To install the LXD package on the system, run the following commands: the first updates the list of packages available in the repository, and the second installs the package itself:
sudo pacman -Syyu && sudo pacman -S lxd
After installing the package, in order to manage LXD as a regular user, that user must be added to the lxd system group:
sudo usermod -a -G lxd user1
Make sure the user user1 has been added to the lxd group:
id -Gn user1
user1 adm dialout cdrom floppy sudo audio dip video plugdev netdev lxd
If the lxd group is not visible in the list, you need to restart the user session: log out and log back in as the same user.
Enable the LXD service in systemd so that it loads at system startup:
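The exact command was not shown in the original; on a systemd-based distribution it would typically look like this (a sketch, assuming the service unit is named lxd — on some distributions it may be snap.lxd.daemon instead):

```shell
# Enable the LXD service at boot and start it immediately.
# The unit name "lxd" is an assumption; check your distribution's packaging.
sudo systemctl enable --now lxd
```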
Before starting the initialization, we need to understand how the storage in LXD is logically arranged.
Storage in LXD is composed of one or more storage pools, each backed by one of the supported file systems, such as ZFS, BTRFS, LVM, or plain directories. Every storage pool is divided into storage volumes, which contain images, containers, or data for other purposes.
Images are specially built distributions without a Linux kernel, available from external sources
Containers are distributions deployed from images, ready for operation
Snapshots are saved states of containers that you can roll back to
To manage storage in LXD, use the lxc storage command; help for it is available via the --help flag: lxc storage --help
The following command displays a list of all storage pools in LXD storage:
lxc storage list
+---------+-------------+--------+--------------------------------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+---------+-------------+--------+--------------------------------+---------+
| hddpool | | btrfs | /dev/loop1 | 2 |
+---------+-------------+--------+--------------------------------+---------+
| ssdpool | | btrfs | /var/lib/lxd/disks/ssdpool.img | 4 |
+---------+-------------+--------+--------------------------------+---------+
To view a list of all storage volumes in a selected storage pool, use the lxc storage volume list command:
lxc storage volume list hddpool
+-------+----------------------------------+-------------+---------+
| TYPE | NAME | DESCRIPTION | USED BY |
+-------+----------------------------------+-------------+---------+
| image | ebd565585223487526ddb3607f515... | | 1 |
+-------+----------------------------------+-------------+---------+
lxc storage volume list ssdpool
+-----------+----------------------------------+-------------+---------+
| TYPE | NAME | DESCRIPTION | USED BY |
+-----------+----------------------------------+-------------+---------+
| container | alp3 | | 1 |
+-----------+----------------------------------+-------------+---------+
| container | jupyter | | 1 |
+-----------+----------------------------------+-------------+---------+
| image | ebd565585223487526ddb3607f515... | | 1 |
+-----------+----------------------------------+-------------+---------+
Also, if the BTRFS file system was selected when creating a storage pool, you can list its storage volumes (subvolumes, in BTRFS terms) using that file system's own toolkit:
sudo btrfs subvolume list -p /var/lib/lxd/storage-pools/hddpool
ID 257 gen 818 parent 5 top level 5 path images/ebd565585223487526ddb3607f5156e875c15a89e21b61ef004132196da6a0a3
sudo btrfs subvolume list -p /var/lib/lxd/storage-pools/ssdpool
ID 257 gen 1820 parent 5 top level 5 path images/ebd565585223487526ddb3607f5156e875c15a89e21b61ef004132196da6a0a3
ID 260 gen 1819 parent 5 top level 5 path containers/jupyter
ID 263 gen 1820 parent 5 top level 5 path containers/alp3
Before creating and using containers, you must perform a general LXD initialization that creates and configures the network and storage. This can be done manually using the standard client commands (listed by running lxc --help), or by using the initialization wizard lxd init and answering a few questions.
During initialization, LXD asks several questions, including the file system type for the default storage pool. By default, BTRFS is selected; it will be impossible to switch to another file system after creation. To help choose a file system, a feature comparison table is offered.
Initializing the Network and Storage Pool Using the Wizard
The next command we'll look at is to set up the main components of LXD by answering simple questions using the initialization wizard.
Run the lxd init command and enter answers to the questions after the colon as shown in the example below, or change them to match your setup:
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: ssdpool
Name of the storage backend to use (lvm, btrfs, dir) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=15GB]: 10GB
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 10.0.5.1/24
Would you like LXD to NAT IPv4 traffic on your bridge? [default=yes]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] no
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
In the previous step we created a storage pool named ssdpool, whose backing file on my system is located at /var/lib/lxd/disks/ssdpool.img. That path corresponds to a physical SSD in my PC.
Next, to better understand the role storage pools play, we will create a second storage pool located physically on a different type of disk, an HDD. The problem is that LXD does not allow creating a storage pool outside of /var/lib/lxd/disks/, and even symlinks will not work; see the developer's answer. We can get around this limitation when initializing the storage pool by specifying a block device instead of a path to a loopback file in the source key.
So before creating the storage pool, you need to decide on a loopback file or an existing partition on your file system for it to use. To do this, we will create and use a file limited to 10 GB in size:
dd if=/dev/zero of=/mnt/work/lxd/hddpool.img bs=1MB count=10000
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB, 9,3 GiB) copied, 38,4414 s, 260 MB/s
Connect the loopback file to a free loopback device:
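The command itself was omitted from the original; with the standard losetup utility it would look roughly like this (the file path matches the one created above):

```shell
# Attach the loopback file to the first free loop device and print the device name.
sudo losetup --find --show /mnt/work/lxd/hddpool.img
```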
Thanks to the --show flag, running the command prints the name of the device our loopback file was attached to. If necessary, we can display a list of all occupied devices of this type to verify that our actions are correct:
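The listing command was not shown in the original; with the same utility it would be:

```shell
# List all loop devices currently in use.
sudo losetup -l
```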
From the list you can see that the loopback file /mnt/work/lxd/hddpool.img is attached to the device /dev/loop1, while /dev/loop0 holds the loopback file /var/lib/lxd/disks/ssdpool.img, which corresponds to the default storage pool.
The following command creates a new storage pool in LXD based on the loopback file we just prepared. LXD will format the loopback file /mnt/work/lxd/hddpool.img on the device /dev/loop1 with the BTRFS file system:
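The command was not shown in the original; based on the lxc storage syntax, it would look roughly like this (pool name and device path taken from the text above):

```shell
# Create a BTRFS-backed storage pool on the block device /dev/loop1.
lxc storage create hddpool btrfs source=/dev/loop1
```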
Auto-insert loopback file into slot of loopback device
There is one small problem: when the host system reboots, the file /mnt/work/lxd/hddpool.img detaches from the device /dev/loop1, and the LXD service will fail on boot because it cannot find it on that device. To solve this, we need to create a system service that attaches this file to /dev/loop1 when the host system boots.
Let's create a unit file of type service in /etc/systemd/system/ for the systemd init system:
cat << EOF | sudo tee /etc/systemd/system/lxd-hddpool.service
[Unit]
Description=Losetup LXD Storage Pool (hddpool)
After=local-fs.target
[Service]
Type=oneshot
ExecStart=/sbin/losetup /dev/loop1 /mnt/work/lxd/hddpool.img
RemainAfterExit=true
[Install]
WantedBy=local-fs.target
EOF
Activate the service:
sudo systemctl enable lxd-hddpool
Created symlink /etc/systemd/system/local-fs.target.wants/lxd-hddpool.service → /etc/systemd/system/lxd-hddpool.service.
After restarting the host system, check the status of the service:
systemctl status lxd-hddpool.service
● lxd-hddpool.service - Losetup LXD Storage Pool (hddpool)
Loaded: loaded (/etc/systemd/system/lxd-hddpool.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2020-04-08 03:43:53 MSK; 1min 37s ago
Process: 711 ExecStart=/sbin/losetup /dev/loop1 /mnt/work/lxd/hddpool.img (code=exited, status=0/SUCCESS)
Main PID: 711 (code=exited, status=0/SUCCESS)
Apr 08 03:43:52 manjaro systemd[1]: Starting Losetup LXD Storage Pool (hddpool)...
Apr 08 03:43:53 manjaro systemd[1]: Finished Losetup LXD Storage Pool (hddpool).
From the output we can verify that the service state is active even though its single command has already finished executing; this is made possible by the RemainAfterExit=true option.
Since all container processes actually run in isolation on the host system using its kernel, LXD offers two privilege modes to further protect the host from container processes:
Privileged containers are containers in which process UIDs and GIDs correspond to the same owners as on the host system. For example, a process running in a container with UID 0 has all the same permissions as a host process with UID 0. In other words, the root user in the container has full rights not only in the container but also on the host system, if it manages to escape the container's isolated namespace.
Unprivileged containers are containers whose processes belong to owners with UIDs and GIDs in the range 0 to 65535, but on the host system the owner is remapped by adding the configured SubUID and SubGID offsets, respectively. For example, a user with UID=0 in a container will be seen on the host system as SubUID + UID. This protects the host system: if any process in the container manages to escape its isolated namespace, it can only interact with the host as a process with an unknown, very high UID/GID.
By default, newly created containers have an unprivileged status, so we must define a SubUID and a SubGID.
Let's create two configuration files in which we set the mask for SubUID and SubGID, respectively:
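The file contents were not shown in the original; a common convention from the LXD documentation is to give root a large remapping range. The specific values below are an example, not taken from the original text:

```shell
# Example SubUID/SubGID ranges (start 1000000, size 1000000000); adjust to your system.
echo "root:1000000:1000000000" | sudo tee /etc/subuid
echo "root:1000000:1000000000" | sudo tee /etc/subgid
```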
Since we already initialized the network with the lxd init wizard and created the network device lxdbr0, in this section we will simply get acquainted with networking in LXD and see how to create a virtual switch (network bridge) using the client command.
The following diagram demonstrates how a switch (network bridge) connects the host and containers to a network:
Containers can communicate over the network with other containers or with the host on which they run. To do this, the containers' virtual network cards must be linked to the virtual switch. We will create the switch first; the containers' network interfaces will be linked in later chapters, after the container itself has been created.
The following command creates a switch with the subnet 10.0.5.0/24 and IPv4 address 10.0.5.1/24, and also enables ipv4.nat so that containers can reach the internet through the host using NAT:
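The command was omitted from the original; reconstructed from the parameters described above, it would look roughly like this:

```shell
# Create the lxdbr0 bridge with NAT-ed IPv4 and no IPv6.
lxc network create lxdbr0 ipv4.address=10.0.5.1/24 ipv4.nat=true ipv6.address=none
```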
Each container in LXD has its own configuration and can extend it with globally declared configurations called configuration profiles. Applying configuration profiles to a container has a cascading model, the following example demonstrates this:
In this example, three profiles exist in the LXD system: default, hddpool, and hostfs. All three are applied to a container that also has a local configuration (the gray area). The default profile has a root device whose pool parameter is ssdpool, but thanks to the cascading model we can also apply the hddpool profile to the container: its pool parameter overrides the same parameter from default, so the container gets a root device with pool equal to hddpool, while the hostfs profile simply adds a new device to the container.
To see the list of available configuration profiles, use the following command:
lxc profile list
+---------+---------+
| NAME | USED BY |
+---------+---------+
| default | 1 |
+---------+---------+
| hddroot | 0 |
+---------+---------+
| ssdroot | 1 |
+---------+---------+
A complete list of available commands for working with a profile can be obtained by adding the key --help:
lxc profile --help
Description:
Manage profiles
Usage:
lxc profile [command]
Available Commands:
add Add profiles to instances
assign Assign sets of profiles to instances
copy Copy profiles
create Create profiles
delete Delete profiles
device Manage instance devices
edit Edit profile configurations as YAML
get Get values for profile configuration keys
list List profiles
remove Remove profiles from instances
rename Rename profiles
set Set profile configuration keys
show Show profile configurations
unset Unset profile configuration keys
The default configuration profile does not contain a network card configuration, so all newly created containers have no network; for them, local (dedicated) network devices must be created with a separate command. However, we can create a global network device in a configuration profile that will be shared by all containers using that profile. Then, immediately after the create command, new containers will have network access. And there are no restrictions: we can always create a local network device later if necessary.
The following command adds a device named eth0 of type nic, connected to the lxdbr0 network, to the default configuration profile:
lxc profile device add default eth0 nic network=lxdbr0 name=eth0
It is important to note that since we actually added the device to the configuration profile, if we specified a static IP address in the device, then all containers that will use this profile will share the same IP address. If there is a need to create a container with a static IP address dedicated to the container, then you should create a network device configuration at the container level (local configuration) with the IP address parameter, and not at the profile level.
Let's check the profile:
lxc profile show default
config: {}
description: Default LXD profile
devices:
eth0:
name: eth0
network: lxdbr0
type: nic
root:
path: /
pool: ssdpool
type: disk
name: default
used_by: []
In this profile, we can see that two devices will be created for all newly created containers:
eth0 — a device of type nic connected to the switch (network bridge) lxdbr0
root — a device of type disk which uses the storage pool ssdpool
To make containers use the previously created storage pool, create a configuration profile ssdroot and add to it a device of type disk with mount point / (root) that uses the previously created storage pool ssdpool:
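The commands were not shown in the original; based on the lxc profile syntax used elsewhere in the article, they would look roughly like this:

```shell
# Create the profile and attach a root disk device backed by the ssdpool storage pool.
lxc profile create ssdroot
lxc profile device add ssdroot root disk path=/ pool=ssdpool
```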
Containers are created from images, which are specially built distributions without a Linux kernel. Therefore, before running a container, it must be deployed from an image. The source of images is the local repository, into which images are downloaded from external repositories.
To start using containers, you need to copy an image from a global repository into the local one, local:. The local repository is currently empty, which we can confirm with the lxc image list command. If no repository is specified to the list subcommand, the local repository local: is used by default:
lxc image list local:
+-------+-------------+--------+-------------+--------------+------+------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE |
+-------+-------------+--------+-------------+--------------+------+------+
Image management in the repository is performed by the following commands:
lxc image alias - Manage image aliases
lxc image copy - Copy images between servers
lxc image delete - Delete images
lxc image edit - Edit image properties
lxc image export - Export and download images
lxc image import - Import images into the image store
lxc image info - Show useful information about images
lxc image list - List images
lxc image refresh - Refresh images
lxc image show - Show image properties
Copy the image to the local repository from the global images: repository:
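The exact command was elided in the original; with lxc image copy it would look roughly like this. The image name alpine/3.11 is an assumption (chosen to match the alpine3 alias used later in the article); substitute the image you need:

```shell
# Copy an Alpine image from the public images: remote into the local repository,
# assigning it the alias "alpine3" used in later examples.
lxc image copy images:alpine/3.11 local: --alias alpine3
```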
In addition to interactive mode, LXD also supports a non-interactive configuration mode, in which the configuration is specified as a YAML file. This format lets you apply the entire configuration in one go, bypassing the many interactive commands discussed above, including network configuration, configuration profile creation, and so on. We will not cover that area here; you can read about it in the documentation.
The next interactive command we will look at, lxc config, allows you to set configuration values. For example, to prevent images uploaded to the local repository from being automatically updated from the global repositories, we can configure this behavior with the following command:
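The command was not shown in the original; assuming the standard server configuration key for cached image updates, it would be:

```shell
# Disable automatic updates of images cached in the local repository.
lxc config set images.auto_update_cached false
```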
The command to create a container is lxc init, which is passed a value of the form repository:image, followed by the desired ID for the container. The repository can be the local one, local:, or any global repository. If no repository is specified, the local repository is searched for the image by default. If the image is specified from a global repository, it will first be downloaded to the local repository and then used to create the container.
Let's run the following command to create our first container:
lxc init alpine3 alp --storage=hddpool --profile=default --profile=hddroot
Let's analyze in order the command keys that we use here:
alpine3 - the alias specified for the image that was previously uploaded to the local repository. If no alias was created for the image, you can always refer to it by its fingerprint, which is displayed in the table.
alp - the ID assigned to the container
--storage - this key indicates in which storage pool the container will be created
--profile - these keys cascade the configuration from previously created configuration profiles onto the container
We start the container, which starts running the distribution's init system:
lxc start alp
Alternatively, you can use the lxc launch command, which combines lxc init and lxc start into a single operation.
Checking the status of the container:
lxc list -c ns46tb
+------+---------+------------------+------+-----------+--------------+
| NAME | STATE | IPV4 | IPV6 | TYPE | STORAGE POOL |
+------+---------+------------------+------+-----------+--------------+
| alp | RUNNING | 10.0.5.46 (eth0) | | CONTAINER | hddpool |
+------+---------+------------------+------+-----------+--------------+
In the profiles section we can verify that this container uses two configuration profiles, default and hddroot. In the devices section we can discover only one device, since the network device was created at the level of the default profile. To see all devices used by the container, add the --expanded flag:
If we try to set an IP address for the network device eth0 with the lxc config device set alp command, intended for container-level configuration, we will get an error saying that the device does not exist, because the eth0 device used by the container belongs to the default profile:
lxc config device set alp eth0 ipv4.address 10.0.5.5
Error: The device doesn't exist
We could, of course, set a static IP address for the eth0 device in the profile, but it would be the same for all containers using that profile. So instead, let's add a device dedicated to the container:
lxc config device add alp eth0 nic name=eth0 nictype=bridged parent=lxdbr0 ipv4.address=10.0.5.5
Then you need to restart the container:
lxc restart alp
If we now look at the container configuration, we no longer need the --expanded option to see the network device eth0, since we created it at the container level and it cascaded over the same device from the default profile:
To execute commands in a container directly, bypassing network connections, use lxc exec, which runs commands in the container without starting a system shell. If you need to execute a command in a shell using shell features such as variables or file redirects (pipes), you must start the shell explicitly and pass the command as an argument, for example:
lxc exec alp -- /bin/sh -c "echo \$HOME"
The escape character was used so that the special character $ in the variable $HOME was not interpreted on the host machine, but only inside the container.
It is also possible to start an interactive shell session, and then end it with the hotkey CTRL+D:
In LXD, you can manage container resources using a special configuration set. A complete list of container configuration parameters can be found in documentation.
limits.cpu - binds the container to one or more CPU cores
limits.cpu.allowance - manages either the CFS scheduler quota, when a time value is passed, or the generic CPU resource sharing mechanism, when a percentage is passed
limits.cpu.priority - the scheduler priority when multiple instances sharing a set of processors are assigned the same percentage of processors
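As a sketch of how these keys are applied with lxc config set (the container name alp comes from the earlier examples; the specific values below are illustrative):

```shell
# Pin the container to two CPU cores...
lxc config set alp limits.cpu 2
# ...and allow it 50% of the available CPU time (generic sharing mechanism).
lxc config set alp limits.cpu.allowance 50%
```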
In addition to restrictions such as limits.read and limits.write, we can also limit the amount of disk space consumed by the container (this only works with ZFS or BTRFS):
lxc config device set alp root size=2GB
After setting it, we can verify in the devices.root.size parameter that the constraint is in place:
lxc config show alp
...
devices:
root:
path: /
pool: hddpool
size: 2GB
type: disk
ephemeral: false
profiles:
- default
- hddroot
stateful: false
description: ""
Disk quota usage can be viewed with the lxc info command:
lxc info alp
...
Resources:
Processes: 5
Disk usage:
root: 1.05GB
CPU usage:
CPU usage (in seconds): 1
Memory usage:
Memory (current): 5.46MB
Network usage:
eth0:
Bytes received: 802B
Bytes sent: 1.59kB
Packets received: 4
Packets sent: 14
lo:
Bytes received: 0B
Bytes sent: 0B
Packets received: 0
Packets sent: 0
Even though we set the container's root device limit to 2GB, system utilities such as df will not see this limitation. To show this, let's run a small test and find out how it works.
Let's create 2 new identical containers in the same storage pool (hddpool):
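The creation commands were omitted from the original; based on the earlier examples, they would look roughly like this (the 1000 MB test file matches the ls output below; container names alp1 and alp2 come from the surrounding text):

```shell
# Create and start two identical containers in the hddpool storage pool.
lxc launch alpine3 alp1 --storage=hddpool --profile=default --profile=hddroot
lxc launch alpine3 alp2 --storage=hddpool --profile=default --profile=hddroot
# Write a 1000 MB test file inside the first container.
lxc exec alp1 -- dd if=/dev/zero of=file.img bs=1M count=1000
```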
lxc exec alp1 -- ls -lh
total 1000M
-rw-r--r-- 1 root root 1000.0M Mar 27 10:16 file.img
If we look in the second container for a file at the same location, it will not be there, which is expected, since each container is created in its own storage volume within the same storage pool:
lxc exec alp2 -- ls -lh
total 0
But let's compare the values that df reports in each of the two containers:
lxc exec alp1 -- df -hT
Filesystem Type Size Used Available Use% Mounted on
/dev/loop1 btrfs 9.3G 1016.4M 7.8G 11% /
...
lxc exec alp2 -- df -hT
Filesystem Type Size Used Available Use% Mounted on
/dev/loop1 btrfs 9.3G 1016.4M 7.8G 11% /
...
The device /dev/loop1, mounted as the root partition, is the storage pool that these containers use, so they share its capacity between them.
LXD has the ability to create snapshots and restore the state of the container from them.
To create a snapshot, run the following command:
lxc snapshot alp snapshot1
The lxc snapshot command has no list subcommand, so to view the list of snapshots you need to use the command that displays general information about the container:
lxc info alp
...
...
Snapshots:
snapshot1 (taken at 2020/04/08 18:18 UTC) (stateless)
You can restore a container from a snapshot with the command lxc restore specifying the container for which the restoration will be performed and the snapshot alias:
lxc restore alp snapshot1
The following command deletes a snapshot. Note that its syntax differs from the other commands: here you must specify a forward slash after the container name. If the slash is omitted, the snapshot deletion command is interpreted as a container deletion command!
lxc delete alp/snapshot1
In the example above, we looked at the so-called stateless snapshots. In LXD, there is another type of snapshot - stateful, which saves the current state of all processes in the container. There are a number of interesting and useful features associated with stateful snapshots.