Core Features of LXD - Linux Container Systems

LXD is, in the words of its own documentation, the next-generation system container manager. It offers a user experience similar to virtual machines, but uses Linux containers instead.

The LXD core is a privileged daemon (a service running as root) that provides a REST API via a local UNIX socket, and also over the network if the appropriate configuration is enabled. Clients, such as the command-line tool shipped with LXD, send requests through this REST API. This means that whether you are accessing the local host or a remote host, everything works the same way.
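
For example, a quick way to check that the daemon is answering on the local UNIX socket is to query the API root. This is only a sketch: the socket path below assumes a non-snap installation (for the snap package it is usually /var/snap/lxd/common/lxd/unix.socket), and the bundled lxc query subcommand can be used instead of curl:

curl --unix-socket /var/lib/lxd/unix.socket lxd/1.0

lxc query /1.0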

In this article we will not dwell on the concepts behind LXD, nor cover every feature described in the documentation, including the recent addition in the latest LXD releases of support for QEMU virtual machines alongside containers. Instead, we will learn only the basics of container management: setting up storage pools and networking, running a container, applying resource limits, and using snapshots, so that you get a basic understanding of LXD and can use containers on Linux.

For complete information, please refer to the official documentation.

Navigation

LXD installation ^

Installing LXD on Ubuntu distributions ^

In the Ubuntu 19.10 distribution, the lxd package is a transitional package that points to the snap package:

apt search lxd

lxd/eoan 1:0.7 all
  Transitional package - lxd -> snap (lxd)

This means that two packages will be installed at once: one as a system package and the other as a snap package. Installing two packages on one system can create a problem: the system package may become orphaned if the snap package is later removed by the snap package manager.

To find the lxd package in the snap repository, you can use the following command:

snap find lxd

Name             Version        Summary
lxd              3.21           System container manager and API
lxd-demo-server  0+git.6d54658  Online software demo sessions using LXD
nova             ocata          OpenStack Compute Service (nova)
nova-hypervisor  ocata          OpenStack Compute Service - KVM Hypervisor (nova)
distrobuilder    1.0            Image builder for LXC and LXD
fabrica          0.1            Build snaps by simply pointing a web form to...
satellite        0.1.2          Advanced scalable Open source intelligence platform

By running the list command you can make sure that the lxd package is not installed yet:

snap list

Name  Version    Rev   Tracking  Publisher   Notes
core  16-2.43.3  8689  stable    canonical✓  core

Even though LXD ships as a snap package, it should be installed via the lxd system package, which creates the corresponding group on the system, the necessary utilities in /usr/bin, and so on.

sudo apt update
sudo apt install lxd

Make sure the package is installed as a snap package:

snap list

Name  Version    Rev    Tracking  Publisher   Notes
core  16-2.43.3  8689   stable    canonical✓  core
lxd   3.21       13474  stable/…  canonical✓  -

Installing LXD on Arch Linux distributions ^

To install the LXD package on the system, run the following commands: the first updates the list of packages available in the repositories, the second installs the package itself:

sudo pacman -Syyu && sudo pacman -S lxd

After installing the package, in order to manage LXD as a regular user, that user must be added to the lxd system group:

sudo usermod -a -G lxd user1

Make sure that the user user1 has been added to the lxd group:

id -Gn user1

user1 adm dialout cdrom floppy sudo audio dip video plugdev netdev lxd

If the lxd group is not visible in the list, you need to reactivate the user session: log out and log back in as the same user.
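
Alternatively, the new group can be picked up in the current shell without logging out, for example:

newgrp lxd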

Enable loading of the LXD service at system startup in systemd:

sudo systemctl enable lxd

We start the service:

sudo systemctl start lxd

Checking the service status:

sudo systemctl status lxd

LXD Storage ^

Before starting the initialization, we need to understand how storage in LXD is logically organized.

Storage is composed of one or more storage pools, each of which uses one of the supported backends such as ZFS, BTRFS, LVM, or plain directories. Every storage pool is divided into storage volumes, which contain images, containers, or data for other purposes.

  • Images are specially built distributions without a Linux kernel, available from external sources
  • Containers are distributions deployed from images, ready to run
  • Snapshots are saved states of containers that you can roll back to


Storage in LXD is managed with the lxc storage command; help for it is available via lxc storage --help.

The following command displays a list of all storage pools in LXD:

lxc storage list

+---------+-------------+--------+--------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |             SOURCE             | USED BY |
+---------+-------------+--------+--------------------------------+---------+
| hddpool |             | btrfs  | /dev/loop1                     | 2       |
+---------+-------------+--------+--------------------------------+---------+
| ssdpool |             | btrfs  | /var/lib/lxd/disks/ssdpool.img | 4       |
+---------+-------------+--------+--------------------------------+---------+

To view the list of all storage volumes in a particular storage pool, use the lxc storage volume list command:

lxc storage volume list hddpool

+-------+----------------------------------+-------------+---------+
| TYPE  |          NAME                    | DESCRIPTION | USED BY |
+-------+----------------------------------+-------------+---------+
| image | ebd565585223487526ddb3607f515... |             | 1       |
+-------+----------------------------------+-------------+---------+

lxc storage volume list ssdpool

+-----------+----------------------------------+-------------+---------+
|   TYPE    |            NAME                  | DESCRIPTION | USED BY |
+-----------+----------------------------------+-------------+---------+
| container | alp3                             |             | 1       |
+-----------+----------------------------------+-------------+---------+
| container | jupyter                          |             | 1       |
+-----------+----------------------------------+-------------+---------+
| image     | ebd565585223487526ddb3607f515... |             | 1       |
+-----------+----------------------------------+-------------+---------+

Also, if the BTRFS file system was chosen when creating a storage pool, you can list the storage volumes (subvolumes, in BTRFS terms) using the tools of that file system:

sudo btrfs subvolume list -p /var/lib/lxd/storage-pools/hddpool

ID 257 gen 818 parent 5 top level 5 path images/ebd565585223487526ddb3607f5156e875c15a89e21b61ef004132196da6a0a3

sudo btrfs subvolume list -p /var/lib/lxd/storage-pools/ssdpool

ID 257 gen 1820 parent 5 top level 5 path images/ebd565585223487526ddb3607f5156e875c15a89e21b61ef004132196da6a0a3
ID 260 gen 1819 parent 5 top level 5 path containers/jupyter
ID 263 gen 1820 parent 5 top level 5 path containers/alp3

LXD initialization ^

Before creating and using containers, you must perform a general LXD initialization that creates and configures the network and storage. This can be done manually using the standard client commands (listed by lxc --help) or by running the initialization wizard lxd init and answering a few questions.

Selecting a File System for the Storage Pool ^

During initialization, LXD asks several questions, including the type of file system for the default storage pool. By default, BTRFS is selected. It is impossible to change to another file system after creation. To help choose a file system, the following feature comparison table is provided:

Feature                                    | Directory | Btrfs | LVM | ZFS | CEPH
Optimized image storage                    | No        | Yes   | Yes | Yes | Yes
Optimized instance creation                | No        | Yes   | Yes | Yes | Yes
Optimized snapshot creation                | No        | Yes   | Yes | Yes | Yes
Optimized image transfer                   | No        | Yes   | No  | Yes | Yes
Optimized instance transfer                | No        | Yes   | No  | Yes | Yes
Copy on write                              | No        | Yes   | Yes | Yes | Yes
Block-based                                | No        | No    | Yes | No  | Yes
Instant cloning                            | No        | Yes   | Yes | Yes | Yes
Storage driver usable inside a container   | Yes       | Yes   | No  | No  | No
Restore from older snapshots (not latest)  | Yes       | Yes   | Yes | No  | Yes
Storage quotas                             | Yes(*)    | Yes   | Yes | Yes | No

Initializing the Network and Storage Pool Using the Wizard ^

Next we will set up the main components of LXD by answering simple questions in the initialization wizard.

Run the lxd init command and enter the answers to the questions after the colon as shown in the example below, or change them to suit your environment:

lxd init

Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: ssdpool         
Name of the storage backend to use (lvm, btrfs, dir) [default=btrfs]: 
Create a new BTRFS pool? (yes/no) [default=yes]: 
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=15GB]: 10GB
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 10.0.5.1/24
Would you like LXD to NAT IPv4 traffic on your bridge? [default=yes]: 
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] no
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 

Create an additional Storage Pool ^

In the previous step we created a storage pool named ssdpool, whose file on my system is located at /var/lib/lxd/disks/ssdpool.img. That path is on a physical SSD in my PC.

To better understand the role storage pools play, we will now create a second storage pool that physically resides on a different type of disk, an HDD. The problem is that LXD does not allow creating a storage pool loop file outside of /var/lib/lxd/disks/, and even symlinks will not work (see the developer's answer). We can work around this limitation when initializing/formatting the storage pool by specifying a block device instead of a path to a loopback file in the source key.

So, before creating the storage pool, you need to decide on a loopback file or an existing partition on your file system that it will use. To do this, we will create and use a file limited to 10GB in size:

dd if=/dev/zero of=/mnt/work/lxd/hddpool.img bs=1MB count=10000

10000+0 records in
10000+0 records out
10000000000 bytes (10 GB, 9,3 GiB) copied, 38,4414 s, 260 MB/s

Connect the loopback file to a free loopback device:

sudo losetup --find --show /mnt/work/lxd/hddpool.img

/dev/loop1

Thanks to the --show option, the command prints the name of the device to which our loopback file has been attached. If necessary, we can list all busy devices of this type to make sure our actions are correct:

losetup -l

NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                      DIO LOG-SEC
/dev/loop1         0      0         0  0 /mnt/work/lxd/hddpool.img        0     512
/dev/loop0         0      0         1  0 /var/lib/lxd/disks/ssdpool.img   0     512

From the list you can see that the device /dev/loop1 has the loopback file /mnt/work/lxd/hddpool.img attached, while the device /dev/loop0 has the loopback file /var/lib/lxd/disks/ssdpool.img attached, which corresponds to the default storage pool.

The following command creates a new storage pool in LXD based on the loopback file we have just prepared. LXD will format the loopback file /mnt/work/lxd/hddpool.img, exposed as the device /dev/loop1, with the BTRFS file system:

lxc storage create hddpool btrfs size=10GB source=/dev/loop1

Let's print the list of all storage pools:

lxc storage list

+---------+-------------+--------+--------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |             SOURCE             | USED BY |
+---------+-------------+--------+--------------------------------+---------+
| hddpool |             | btrfs  | /dev/loop1                     | 0       |
+---------+-------------+--------+--------------------------------+---------+
| ssdpool |             | btrfs  | /var/lib/lxd/disks/ssdpool.img | 0       |
+---------+-------------+--------+--------------------------------+---------+

Increasing the size of the Storage Pool ^

After a storage pool has been created, it can be expanded if necessary. For a storage pool based on the BTRFS file system, run the following commands:

sudo truncate -s +5G /mnt/work/lxd/hddpool.img
sudo losetup -c /dev/loop1
sudo btrfs filesystem resize max /var/lib/lxd/storage-pools/hddpool

Automatically attaching the loopback file to the loop device ^

There is one small problem: when the host system reboots, the file /mnt/work/lxd/hddpool.img is detached from the device /dev/loop1, and the LXD service will fail on boot because it will not find it on that device. To solve this, we need to create a system service that attaches this file to the device /dev/loop1 when the host system boots.

Let's create a unit file of type service in /etc/systemd/system/ for the systemd init system:

cat << EOF | sudo tee -a /etc/systemd/system/lxd-hddpool.service
[Unit]
Description=Losetup LXD Storage Pool (hddpool)
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/losetup /dev/loop1 /mnt/work/lxd/hddpool.img
RemainAfterExit=true

[Install]
WantedBy=local-fs.target
EOF

Activate the service:

sudo systemctl enable lxd-hddpool

Created symlink /etc/systemd/system/local-fs.target.wants/lxd-hddpool.service → /etc/systemd/system/lxd-hddpool.service.

After restarting the host system, check the status of the service:

systemctl status lxd-hddpool.service 

● lxd-hddpool.service - Losetup LXD Storage Pool (hddpool)
     Loaded: loaded (/etc/systemd/system/lxd-hddpool.service; enabled; vendor preset: disabled)
     Active: active (exited) since Wed 2020-04-08 03:43:53 MSK; 1min 37s ago
    Process: 711 ExecStart=/sbin/losetup /dev/loop1 /mnt/work/lxd/hddpool.img (code=exited, status=0/SUCCESS)
   Main PID: 711 (code=exited, status=0/SUCCESS)

апр 08 03:43:52 manjaro systemd[1]: Starting Losetup LXD Storage Pool (hddpool)...
апр 08 03:43:53 manjaro systemd[1]: Finished Losetup LXD Storage Pool (hddpool).

From the output we can see that the service state is active even though the execution of our single-command script has finished; the RemainAfterExit=true option makes this possible.

Safety. Container Privileges ^

Since all container processes actually run in isolation on the host system using its kernel, LXD offers container privilege modes to further restrict the access of container processes to the host system:

  • Privileged Containers are containers in which processes with UID and GID correspond to the same owner as on the host system. For example, a process running in a container with UID 0 has all the same permissions as a host system process with UID 0. In other words, the root user in the container has all rights not only in the container, but also on the host system if he can go outside the container's isolated namespace.

  • Unprivileged containers are containers in which process UIDs and GIDs range from 0 to 65535, but for the host system the owner is shifted using the configured SubUID and SubGID ranges, respectively. For example, a user with UID=0 in a container will be seen on the host system as SubUID + UID. This protects the host system: if any process in the container manages to escape its isolated namespace, it can only interact with the host system as a process with an unknown, very high UID/GID.

By default, newly created containers have an unprivileged status, so we must define a SubUID and a SubGID.

Let's create the two configuration files in which the SubUID and SubGID ranges are set, respectively:

sudo touch /etc{/subuid,/subgid}
sudo usermod --add-subuids 1000000-1065535 root 
sudo usermod --add-subgids 1000000-1065535 root
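
The resulting entries can be checked by viewing both files; with the range used above they should look something like this:

cat /etc/subuid /etc/subgid

root:1000000:65536
root:1000000:65536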

To apply the changes, the LXD service must be restarted:

sudo systemctl restart lxd

Create a virtual network switch ^

Since we already initialized the network with the lxd init wizard and created the network device lxdbr0, in this section we will simply get acquainted with networking in LXD and with how to create a virtual switch (network bridge) using the client command.

The following diagram demonstrates how a switch (network bridge) connects the host and containers into a network:

[Diagram: a network bridge connecting the host and its containers]

Containers can communicate over the network with other containers or with the host that serves them. To do this, the virtual network cards of the containers must be linked to the virtual switch. We will create the switch first; the containers' network interfaces will be linked in later chapters, after the container itself has been created.

The following command creates a switch with the subnet 10.0.5.0/24 and the IPv4 address 10.0.5.1/24, and also enables ipv4.nat so that containers can access the internet through the host using NAT:

lxc network create lxdbr0 ipv4.address=10.0.5.1/24 ipv4.nat=true ipv6.address=none

Check the list of network devices available in LXD:

lxc network list

+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| eno1   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| lxdbr0 | bridge   | YES     |             | 0       |
+--------+----------+---------+-------------+---------+

You can also make sure that the network device has been created using the standard Linux tools ip link or ip addr:

ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:ee:7b:5a:6b:44 brd ff:ff:ff:ff:ff:ff
    altname enp0s25
    inet6 fe80::9571:11f3:6e0c:c07b/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:38:90:df:cb:59 brd ff:ff:ff:ff:ff:ff
    inet 10.0.5.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::c038:90ff:fedf:cb59/64 scope link 
       valid_lft forever preferred_lft forever
5: veth3ddab174@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether ca:c3:5c:1d:22:26 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Configuration Profile ^

Each container in LXD has its own configuration and can extend it with globally declared configurations called configuration profiles. Applying configuration profiles to a container follows a cascading model, as the following example demonstrates:

[Diagram: cascading application of the default, hddpool, and hostfs profiles onto a container's local configuration]

In this example, three profiles are created in the LXD system: default, hddpool, and hostfs. All three profiles are applied to a container that also has its own local configuration (the gray area). The default profile has a root device whose pool parameter is ssdpool, but thanks to the cascading configuration model we can also apply the hddpool profile to the container: its pool parameter overrides the same parameter from the default profile, so the container ends up with a root device whose pool is hddpool, while the hostfs profile simply adds a new device to the container.

To see the list of available configuration profiles, use the following command:

lxc profile list

+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| default | 1       |
+---------+---------+
| hddroot | 0       |
+---------+---------+
| ssdroot | 1       |
+---------+---------+

A complete list of available commands for working with a profile can be obtained by adding the key --help:

lxc profile --help

Description:
  Manage profiles

Usage:
  lxc profile [command]

Available Commands:
  add         Add profiles to instances
  assign      Assign sets of profiles to instances
  copy        Copy profiles
  create      Create profiles
  delete      Delete profiles
  device      Manage instance devices
  edit        Edit profile configurations as YAML
  get         Get values for profile configuration keys
  list        List profiles
  remove      Remove profiles from instances
  rename      Rename profiles
  set         Set profile configuration keys
  show        Show profile configurations
  unset       Unset profile configuration keys

Editing a profile ^

The default configuration profile default has no network card configuration for containers, so all newly created containers have no network; for them, local (dedicated) network devices would have to be created with a separate command. However, we can create a global network device in the configuration profile that will be shared by all containers using that profile. Then, immediately after the command to create a new container, the container will have network access. There are no restrictions either way: we can always create a local network device later if needed.

The following command adds a device named eth0 of type nic, connected to the network lxdbr0, to the configuration profile:

lxc profile device add default eth0 nic network=lxdbr0 name=eth0

It is important to note that since we added the device to a configuration profile, if we had specified a static IP address on the device, all containers using this profile would share the same IP address. If you need a container with its own static IP address, you should create the network device configuration at the container level (local configuration) with the IP address parameter, not at the profile level.

Let's check the profile:

lxc profile show default

config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: ssdpool
    type: disk
name: default
used_by: []

In this profile we can see that two devices will be created for all newly created containers:

  • eth0 - a device of type nic connected to the switch (network bridge) lxdbr0
  • root - a device of type disk that uses the storage pool ssdpool

Creating new profiles ^

So that containers can use the storage pools created earlier, let's create a configuration profile ssdroot, to which we add a device of type disk with mount point / (root) that uses the previously created storage pool ssdpool:

lxc profile create ssdroot
lxc profile device add ssdroot root disk path=/ pool=ssdpool

Similarly, we create another disk device, but this time using the storage pool hddpool:

lxc profile create hddroot
lxc profile device add hddroot root disk path=/ pool=hddpool

Checking configuration profiles:

lxc profile show ssdroot

config: {}
description: ""
devices:
  root:
    path: /
    pool: ssdpool
    type: disk
name: ssdroot
used_by: []

lxc profile show hddroot

config: {}
description: ""
devices:
  root:
    path: /
    pool: hddpool
    type: disk
name: hddroot
used_by: []

Image repository ^

Containers are created from images, which are specially built distributions without a Linux kernel. Therefore, before running a container, it must be deployed from such an image. The source of images is the local repository, into which images are downloaded from external repositories.

Remote image repositories ^

By default, LXD is configured to receive images from three remote sources:

  • ubuntu: (for stable Ubuntu images)
  • ubuntu-daily: (for daily Ubuntu images)
  • images: (for a variety of other distributions)

lxc remote list

+-----------------+------------------------------------------+--------+--------+
|      NAME       |                   URL                    | PUBLIC | STATIC |
+-----------------+------------------------------------------+--------+--------+
| images          | https://images.linuxcontainers.org       | YES    | NO     |
+-----------------+------------------------------------------+--------+--------+
| local (default) | unix://                                  | NO     | YES    |
+-----------------+------------------------------------------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | YES    | YES    |
+-----------------+------------------------------------------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | YES    | YES    |
+-----------------+------------------------------------------+--------+--------+

For example, the repository ubuntu: has the following images:

lxc image -c dasut list ubuntu: | head -n 11

+----------------------------------------------+--------------+----------+------------+
|                   DESCRIPTION                | ARCHITECTURE |   SIZE   |   TYPE     |
+----------------------------------------------+--------------+----------+------------+
| ubuntu 12.04 LTS amd64 (release) (20150728)  | x86_64       | 153.72MB | CONTAINER  |
+----------------------------------------------+--------------+----------+------------+
| ubuntu 12.04 LTS amd64 (release) (20150819)  | x86_64       | 152.91MB | CONTAINER  |
+----------------------------------------------+--------------+----------+------------+
| ubuntu 12.04 LTS amd64 (release) (20150906)  | x86_64       | 154.69MB | CONTAINER  |
+----------------------------------------------+--------------+----------+------------+
| ubuntu 12.04 LTS amd64 (release) (20150930)  | x86_64       | 153.86MB | CONTAINER  |
+----------------------------------------------+--------------+----------+------------+

To display a limited number of columns, we used the option -c with parameters dasut, and also limited the length of the list with the command head.

Filtering is available when listing images. The following command lists all available architectures of the Alpine Linux distribution:

lxc image -c ldast list images:alpine/3.11

+------------------------------+--------------------------------------+--------------+
|            ALIAS             |             DESCRIPTION              | ARCHITECTURE |
+------------------------------+--------------------------------------+--------------+
| alpine/3.11 (3 more)         | Alpine 3.11 amd64 (20200220_13:00)   | x86_64       |
+------------------------------+--------------------------------------+--------------+
| alpine/3.11/arm64 (1 more)   | Alpine 3.11 arm64 (20200220_13:00)   | aarch64      |
+------------------------------+--------------------------------------+--------------+
| alpine/3.11/armhf (1 more)   | Alpine 3.11 armhf (20200220_13:00)   | armv7l       |
+------------------------------+--------------------------------------+--------------+
| alpine/3.11/i386 (1 more)    | Alpine 3.11 i386 (20200220_13:01)    | i686         |
+------------------------------+--------------------------------------+--------------+
| alpine/3.11/ppc64el (1 more) | Alpine 3.11 ppc64el (20200220_13:00) | ppc64le      |
+------------------------------+--------------------------------------+--------------+
| alpine/3.11/s390x (1 more)   | Alpine 3.11 s390x (20200220_13:00)   | s390x        |
+------------------------------+--------------------------------------+--------------+

Local image repository ^

To start using a container, you need to add an image from a global repository to the local repository local:. The local repository is currently empty; we can confirm this with the lxc image list command. If no repository is specified in the list command, the local repository local: is used by default:

lxc image list local:

+-------+-------------+--------+-------------+--------------+------+------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE |
+-------+-------------+--------+-------------+--------------+------+------+

Image management in the repository is performed with the following commands:

Command            | Description
lxc image alias    | Manage image aliases
lxc image copy     | Copy images between servers
lxc image delete   | Delete images
lxc image edit     | Edit image properties
lxc image export   | Export and download images
lxc image import   | Import images into the image store
lxc image info     | Show useful information about images
lxc image list     | List images
lxc image refresh  | Refresh images
lxc image show     | Show image properties

Copy the image from the global repository images: to the local repository local::

lxc image copy images:alpine/3.11/amd64 local: --alias=alpine3

Image copied successfully!

List all images currently available in the local repository local::

lxc image -c lfdatsu list local:

+---------+--------------+------------------------------------+--------------+
|  ALIAS  | FINGERPRINT  |            DESCRIPTION             | ARCHITECTURE |
+---------+--------------+------------------------------------+--------------+
| alpine3 | 73a3093d4a5c | Alpine 3.11 amd64 (20200220_13:00) | x86_64       |
+---------+--------------+------------------------------------+--------------+

LXD Configuration ^

In addition to the interactive mode, LXD also supports a non-interactive configuration mode, in which the configuration is specified as a YAML file, a special format that lets you apply the entire configuration in one go, bypassing the many interactive commands discussed above, including network configuration, creation of configuration profiles, and so on. We will not cover this area in depth here; you can read about it in the documentation.
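
As a minimal illustration (not a full configuration), a preseed roughly corresponding to the answers given to the wizard above could be fed to lxd init like this; the exact set of keys depends on your setup:

cat <<EOF | lxd init --preseed
config: {}
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: 10.0.5.1/24
    ipv4.nat: "true"
    ipv6.address: none
storage_pools:
- name: ssdpool
  driver: btrfs
  config:
    size: 10GB
profiles:
- name: default
  devices:
    root:
      path: /
      pool: ssdpool
      type: disk
EOF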

The next interactive command we will look at, lxc config, lets you set configuration options. For example, to prevent images downloaded to the local repository from being automatically updated from the global repositories, we can disable this behavior with the following command:

lxc config set images.auto_update_cached=false
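
The current value of a server configuration key can be read back, for example:

lxc config get images.auto_update_cached

false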

Create and manage a container ^

The command to create a container is lxc init, to which a repository:image value is passed, followed by the desired ID for the container. The repository can be the local one (local:) or any global one. If no repository is specified, the local repository is used by default when searching for the image. If the image is specified from a global repository, the image will first be downloaded to the local repository and then used to create the container.

Let's run the following command to create our first container:

lxc init alpine3 alp --storage=hddpool --profile=default --profile=hddroot

Let's analyze in order the command keys that we use here:

  • alpine3 - the alias specified for the image previously uploaded to the local repository. If no alias was created for this image, you can always refer to it by its fingerprint, which is shown in the table.
  • alp - the ID set for the container
  • --storage - this option indicates in which storage pool the container will be created
  • --profile - these options cascade the configuration from previously created configuration profiles onto the container

We start the container, which starts running the distribution's init system:

lxc start alp

You can also use the lxc launch command, which combines lxc init and lxc start into one operation.
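
For example, an equivalent of the two commands above in a single step could look like this:

lxc launch alpine3 alp --storage=hddpool --profile=default --profile=hddroot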

Checking the status of the container:

lxc list -c ns46tb
+------+---------+------------------+------+-----------+--------------+
| NAME |  STATE  |       IPV4       | IPV6 |   TYPE    | STORAGE POOL |
+------+---------+------------------+------+-----------+--------------+
| alp  | RUNNING | 10.0.5.46 (eth0) |      | CONTAINER | hddpool      |
+------+---------+------------------+------+-----------+--------------+

Checking the container configuration:

lxc config show alp

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.11 amd64 (20200326_13:39)
  image.os: Alpine
  image.release: "3.11"
  image.serial: "20200326_13:39"
  image.type: squashfs
  volatile.base_image: ebd565585223487526ddb3607f5156e875c15a89e21b61ef004132196da6a0a3
  volatile.eth0.host_name: vethb1fe71d8
  volatile.eth0.hwaddr: 00:16:3e:5f:73:3e
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  root:
    path: /
    pool: hddpool
    type: disk
ephemeral: false
profiles:
- default
- hddroot
stateful: false
description: ""

In the profiles section we can verify that this container uses two configuration profiles, default and hddroot. In the devices section only one device is visible, since the network device was created at the level of the default profile. To see all the devices used by the container, add the --expanded flag:

lxc config show alp --expanded

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.11 amd64 (20200326_13:39)
  image.os: Alpine
  image.release: "3.11"
  image.serial: "20200326_13:39"
  image.type: squashfs
  volatile.base_image: ebd565585223487526ddb3607f5156e875c15a89e21b61ef004132196da6a0a3
  volatile.eth0.host_name: vethb1fe71d8
  volatile.eth0.hwaddr: 00:16:3e:5f:73:3e
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: hddpool
    type: disk
ephemeral: false
profiles:
- default
- hddroot
stateful: false
description: ""

Setting a static IP address ^

If we try to set an IP address for the network device eth0 with the lxc config device set alp command, which is intended for the container configuration, we will get an error saying that the device does not exist, because the eth0 device used by the container belongs to the default profile:

lxc config device set alp eth0 ipv4.address 10.0.5.5

Error: The device doesn't exist

We could, of course, set a static IP address for the eth0 device in the profile, but it would be the same for all containers using that profile. Therefore, let's add a device dedicated to this container:

lxc config device add alp eth0 nic name=eth0 nictype=bridged parent=lxdbr0 ipv4.address=10.0.5.5

Then you need to restart the container:

lxc restart alp

If we now look at the container configuration, we do not need the --expanded option to see the network device eth0, since we created it at the container level and it overrides (cascades over) the same device from the default profile:

lxc config show alp

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.11 amd64 (20200326_13:39)
  image.os: Alpine
  image.release: "3.11"
  image.serial: "20200326_13:39"
  image.type: squashfs
  volatile.base_image: ebd565585223487526ddb3607f5156e875c15a89e21b61ef004132196da6a0a3
  volatile.eth0.host_name: veth2a1dc59d
  volatile.eth0.hwaddr: 00:16:3e:0e:e2:71
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    ipv4.address: 10.0.5.5
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: hddpool
    type: disk
ephemeral: false
profiles:
- default
- hddroot
stateful: false
description: ""

Removing a container ^

The command to remove a container is lxc delete, but before deleting the container, it must be stopped with the command lxc stop:

lxc stop alp

lxc list

+------+---------+-------------------+------+-----------+-----------+
| NAME |  STATE  |       IPV4        | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+-------------------+------+-----------+-----------+
| alp  | STOPPED | 10.0.5.10 (eth0)  |      | CONTAINER | 0         |
+------+---------+-------------------+------+-----------+-----------+

After making sure that the container's state has become STOPPED, it can be removed from the storage pool:

lxc delete alp

Container Access ^

To execute commands in a container directly, bypassing network connections, use the lxc exec command, which runs commands in the container without starting a system shell. If you need to execute a command in a shell using shell features such as variables, file redirections (pipes), and so on, you must explicitly start the shell and pass the command as an argument, for example:

lxc exec alp -- /bin/sh -c "echo \$HOME"

The escape character was used in the command so that the special character $ in the variable $HOME was not interpreted on the host machine, but only inside the container.
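
For contrast, without the escape character the variable is expanded by the host shell before lxc exec even runs, so the host's home directory is printed instead:

lxc exec alp -- /bin/sh -c "echo $HOME"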

You can also start an interactive shell session and then end it with the CTRL+D hotkey:

lxc exec alp -- /bin/sh

Container resource management ^

In LXD, container resources are managed through a special set of configuration options. A complete list of container configuration parameters can be found in the documentation.

RAM Resource Limits ^

The limits.memory parameter limits the amount of RAM available to the container. The value is a number followed by one of the available suffixes.

Let's set a container limit on the amount of RAM equal to 256 MB:

lxc config set alp limits.memory 256MB

There are also other options for limiting memory (one is shown in the example after this list):

  • limits.memory.enforce
  • limits.memory.hugepages
  • limits.memory.swap
  • limits.memory.swap.priority
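
For example, swap usage by the container could be disabled with the limits.memory.swap key; this is only an illustration of the syntax:

lxc config set alp limits.memory.swap false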

The lxc config show command displays the entire configuration of the container, including the resource limit we just applied:

lxc config show alp

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.11 amd64 (20200220_13:00)
  image.os: Alpine
  image.release: "3.11"
  image.serial: "20200220_13:00"
  image.type: squashfs
  limits.memory: 256MB
  volatile.base_image: 73a3093d4a5ce0148fd84b95369b3fbecd19a537ddfd2e2d20caa2eef0e8fd60
  volatile.eth0.host_name: veth75b6df07
  volatile.eth0.hwaddr: 00:16:3e:a1:e7:46
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""

CPU Resource Limits ^

To limit CPU resources, there are several types of restrictions:

  • limits.cpu - pins the container to one or more CPU cores
  • limits.cpu.allowance - sets either a CFS scheduler quota, when a time value is passed, or a general CPU share, when a percentage is passed
  • limits.cpu.priority - the scheduler priority when several instances sharing a set of CPUs are assigned the same percentage of CPU

For example, let's allow the container to use 40% of the CPU time:

lxc config set alp limits.cpu.allowance 40%

lxc config show alp

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.11 amd64 (20200220_13:00)
  image.os: Alpine
  image.release: "3.11"
  image.serial: "20200220_13:00"
  image.type: squashfs
  limits.cpu.allowance: 40%
  limits.memory: 256MB
  volatile.base_image: 73a3093d4a5ce0148fd84b95369b3fbecd19a537ddfd2e2d20caa2eef0e8fd60
  volatile.eth0.host_name: veth75b6df07
  volatile.eth0.hwaddr: 00:16:3e:a1:e7:46
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""

Disk space limit ^

In addition to I/O restrictions such as limits.read and limits.write (an example of which is shown a little further on), we can also limit the amount of disk space consumed by a container (this only works with ZFS or BTRFS):

lxc config device set alp root size=2GB

Once it is set, we can verify in the devices.root.size parameter that the limit is in place:

lxc config show alp
...
devices:
  root:
    path: /
    pool: hddpool
    size: 2GB
    type: disk
ephemeral: false
profiles:
- default
- hddroot
stateful: false
description: ""

The disk quota currently in use can be seen in the output of lxc info:

lxc info alp
...
Resources:
  Processes: 5
  Disk usage:
    root: 1.05GB
  CPU usage:
    CPU usage (in seconds): 1
  Memory usage:
    Memory (current): 5.46MB
  Network usage:
    eth0:
      Bytes received: 802B
      Bytes sent: 1.59kB
      Packets received: 4
      Packets sent: 14
    lo:
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0

Even though we have set a 2GB limit on the container's root device, system utilities such as df will not see this limitation. To show this, let's run a small test and see how it works.

Let's create 2 new identical containers in the same storage pool (hddpool):

lxc init alpine3 alp1 --storage=hddpool --profile=default --profile=hddroot
lxc init alpine3 alp2 --storage=hddpool --profile=default --profile=hddroot

lxc list
+------+---------+------------------+------+-----------+-----------+
| NAME |  STATE  |       IPV4       | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------------------+------+-----------+-----------+
| alp1 | RUNNING | 10.0.5.46 (eth0) |      | CONTAINER | 0         |
+------+---------+------------------+------+-----------+-----------+
| alp2 | RUNNING | 10.0.5.30 (eth0) |      | CONTAINER | 0         |
+------+---------+------------------+------+-----------+-----------+

Let's create a 1GB file in one of the containers:

lxc exec alp1 -- dd if=/dev/urandom of=file.img bs=1M count=1000

Make sure the file is created:

lxc exec alp1 -- ls -lh
total 1000M  
-rw-r--r--    1 root     root     1000.0M Mar 27 10:16 file.img

If we look into the second container and check for the file in the same location, it will not be there, which is expected, since containers are created with their own storage volume within the same storage pool:

lxc exec alp2 -- ls -lh
total 0

But let's compare the values that df reports in each of the two containers:

lxc exec alp1 -- df -hT
Filesystem           Type            Size      Used Available Use% Mounted on
/dev/loop1           btrfs           9.3G   1016.4M      7.8G  11% /
...

lxc exec alp2 -- df -hT
Filesystem           Type            Size      Used Available Use% Mounted on
/dev/loop1           btrfs           9.3G   1016.4M      7.8G  11% /
...

The device /dev/loop1, mounted as the root partition, is the storage pool these containers use, so they share its capacity between the two of them.

Resource consumption statistics ^

You can view resource consumption statistics for a container using the command:

lxc info alp

Name: alp
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/04/08 18:05 UTC
Status: Running
Type: container
Profiles: default, hddroot
Pid: 19219
Ips:
  eth0: inet    10.0.5.5        veth2a1dc59d
  eth0: inet6   fe80::216:3eff:fe0e:e271        veth2a1dc59d
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
Resources:
  Processes: 5
  Disk usage:
    root: 495.62kB
  CPU usage:
    CPU usage (in seconds): 1
  Memory usage:
    Memory (current): 4.79MB
  Network usage:
    eth0:
      Bytes received: 730B
      Bytes sent: 1.59kB
      Packets received: 3
      Packets sent: 14
    lo:
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0

Working with snapshots ^

LXD has the ability to create snapshots and restore the state of the container from them.

To create a snapshot, run the following command:

lxc snapshot alp snapshot1

The lxc snapshot command has no list subcommand, so to view the list of snapshots you need to use the command that displays general information about the container:

lxc info alp
...
...
Snapshots:
  snapshot1 (taken at 2020/04/08 18:18 UTC) (stateless)

You can restore a container from a snapshot with the lxc restore command, specifying the container to restore and the snapshot alias:

lxc restore alp snapshot1

The following command is used to delete a snapshot. Note that its syntax differs from all the others: here you must specify a forward slash and the snapshot name after the container name. If the slash is omitted, the snapshot deletion command is interpreted as a container deletion command!

lxc delete alp/snapshot1

In the example above, we looked at the so-called stateless snapshots. In LXD, there is another type of snapshot - stateful, which saves the current state of all processes in the container. There are a number of interesting and useful features associated with stateful snapshots.
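
Creating one looks like the usual command with an extra flag; note that this is just a sketch and requires CRIU to be installed and configured on the host:

lxc snapshot alp snapshot2 --stateful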

What else? ^

  • For Python developers, the PyLXD module is available, providing an API to LXD

UPDATE 10.04.2020 15:00: Added navigation

Source: habr.com
