oVirt in 2 hours. Part 2. Installing the manager and hosts

This article is the next in the oVirt series, which begins here.

Articles

  1. Introduction
  2. Installing the manager (ovirt-engine) and hypervisors (hosts) - we are here
  3. Additional settings

So, let's walk through the initial installation of the ovirt-engine and ovirt-host components.

You can always find the installation process described in more detail in the documentation.

Content

  1. Installing ovirt-engine
  2. Installing ovirt-host
  3. Adding a node to oVirt
  4. Network interface setup
  5. FC setup
  6. FCoE setup
  7. Storage of ISO images
  8. First VM

Installing ovirt-engine

The engine's minimum requirements are 2 cores, 4 GiB of RAM, and 25 GiB of storage; recommended are 4 or more cores, 16 GiB of RAM, and 50 GiB of storage. We use the Standalone Manager option, where the engine runs on a dedicated physical or virtual machine outside the managed cluster. For our installation we will take a virtual machine, for example on a standalone ESXi host*. It is convenient to use deployment automation tools, to clone from a previously prepared template, or to install via kickstart.

*Note: This is a bad idea for a production system, since the manager runs without redundancy and becomes a bottleneck. In that case it is better to consider the Self-hosted Engine option.

If necessary, the procedure for converting a Standalone manager to Self-hosted is described in detail in the documentation. In particular, the host needs to be given a reinstall command with Hosted Engine enabled.
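
As a preview of that procedure, the first step is a backup of the engine configuration and database (a sketch; the full sequence, including restoring this backup during hosted-engine deployment, is in the documentation):

$ sudo engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log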

On the VM, install CentOS 7 in the minimum configuration, then update and reboot the system:

$ sudo yum update -y && sudo reboot

For a virtual machine, it is useful to install a guest agent:

$ sudo yum install open-vm-tools

for VMware ESXi hosts, or for oVirt:

$ sudo yum install ovirt-guest-agent

We connect the repository and install the manager:

$ sudo yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
$ sudo yum install ovirt-engine

Basic setup:

$ sudo engine-setup

In most cases the default settings are sufficient; to accept them automatically, run the setup with the flag:

$ sudo engine-setup --accept-defaults

Now we can connect to our new engine at ovirt.lab.example.com. It's still empty, so let's move on to installing hypervisors.
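
To quickly check that the engine is up, you can query the service and the web endpoint (the URL assumes the hostname chosen during setup):

$ sudo systemctl status ovirt-engine
$ curl -k -I https://ovirt.lab.example.com/ovirt-engine/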

Installing ovirt-host

On the physical host, install CentOS 7 in the minimum configuration, then connect the repository, update and reboot the system:

$ sudo yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
$ sudo yum update -y && sudo reboot
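
It is also worth making sure that hardware virtualization is enabled in the BIOS/UEFI; non-empty output means the required CPU flags are present:

$ grep -E 'svm|vmx' /proc/cpuinfo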

Note: It is convenient to use deployment automation tools or a kickstart installation.

Kickstart file example
Attention! Existing partitions are deleted automatically! Be careful!

# System authorization information
auth --enableshadow --passalgo=sha512
# Use CDROM installation media
cdrom
# Use graphical install
graphical
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us','ru' --switch='grp:alt_shift_toggle'
# System language
lang ru_RU.UTF-8

# Network information
network  --bootproto=dhcp --device=ens192 --ipv6=auto --activate
network  --hostname=kvm01.lab.example.com

# Root password 'monteV1DE0'
rootpw --iscrypted $6$6oPcf0GW9VdmJe5w$6WBucrUPRdCAP.aBVnUfvaEu9ozkXq9M1TXiwOm41Y58DEerG8b3Ulme2YtxAgNHr6DGIJ02eFgVuEmYsOo7./
# User password 'metroP0!is'
user --name=mgmt --groups=wheel --iscrypted --password=$6$883g2lyXdkDLbKYR$B3yWx1aQZmYYi.aO10W2Bvw0Jpkl1upzgjhZr6lmITTrGaPupa5iC3kZAOvwDonZ/6ogNJe/59GN5U8Okp.qx.
# System services
services --enabled="chronyd"
# System timezone
timezone Europe/Moscow --isUtc
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --all
# Disk partitioning information
part /boot --fstype xfs --size=1024 --ondisk=sda  --label=boot
part pv.01 --size=45056 --grow
volgroup HostVG pv.01 --reserved-percent=20
logvol swap --vgname=HostVG --name=lv_swap --fstype=swap --recommended
logvol none --vgname=HostVG --name=HostPool --thinpool --size=40960 --grow
logvol / --vgname=HostVG --name=lv_root --thin --fstype=ext4 --label="root" --poolname=HostPool --fsoptions="defaults,discard" --size=6144 --grow
logvol /var --vgname=HostVG --name=lv_var --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=16536
logvol /var/crash --vgname=HostVG --name=lv_var_crash --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=10240
logvol /var/log --vgname=HostVG --name=lv_var_log --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=8192
logvol /var/log/audit --vgname=HostVG --name=lv_var_audit --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=2048
logvol /home --vgname=HostVG --name=lv_home --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=1024
logvol /tmp --vgname=HostVG --name=lv_tmp --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=1024

%packages
@^minimal
@core
chrony
kexec-tools

%end

%addon com_redhat_kdump --enable --reserve-mb='auto'

%end

%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
%end
# Reboot when the install is finished.
reboot --eject

Save this file, for example, to ftp.example.com/pub/labkvm.cfg. To use the script, at the start of the OS installation select the 'Install CentOS 7' menu item, enable parameter editing mode (the Tab key), and at the end add (separated by a space, without quotes):

inst.ks=ftp://ftp.example.com/pub/labkvm.cfg

The install script removes the existing partitions on /dev/sda and creates new ones following the developers' recommendations (it is convenient to inspect them after installation with the lsblk command). The hostname is set to kvm01.lab.example.com (after installation it can be changed with the hostnamectl set-hostname kvm03.lab.example.com command), the IP address is obtained via DHCP, the time zone is Moscow, and Russian language support is added.
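
For reference, the resulting boot line might look like this (a sketch; the inst.stage2 label depends on the installation media):

vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=CentOS\x207\x20x86_64 quiet inst.ks=ftp://ftp.example.com/pub/labkvm.cfg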

Root password: monteV1DE0, mgmt user password: metroP0!is.
Attention! Existing partitions are deleted automatically! Be careful!

Repeat (or run in parallel) on all hosts. From powering on an “empty” server to the ready state, including two long downloads, takes about 20 minutes.

Adding a Node to oVirt

It is done very simply:

Compute → Hosts → New →…

The required fields in the wizard are Name (the display name, e.g. kvm03), Hostname (the FQDN, e.g. kvm03.lab.example.com), and the Authentication section: the user is always root, with either a password or an SSH public key.

After pressing Ok you will receive the message “You haven't configured Power Management for this Host. Are you sure you want to continue?”. This is normal; we will look at power management later, after the host has successfully connected. However, if the machines on which the hosts are installed do not support management interfaces (IPMI, iLO, DRAC, etc.), I recommend disabling fencing: Compute → Clusters → Default → Edit → Fencing Policy → Enable fencing, uncheck it.

If the oVirt repository was not connected on the host, the installation will fail, but that's okay: add the repository, then click Installation → Reinstall.

Host connection takes no more than 5-10 minutes.
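
If you prefer automation to the UI, hosts can also be added through the oVirt REST API; a minimal sketch with curl (assuming the admin@internal account and a self-signed certificate, hence -k):

$ curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
    -d '<host><name>kvm03</name><address>kvm03.lab.example.com</address><root_password>PASSWORD</root_password></host>' \
    https://ovirt.lab.example.com/ovirt-engine/api/hosts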

Network interface setup

Since we are building a fault-tolerant system, the network connection must also be redundant; this is configured on the Compute → Hosts → HOST → Network Interfaces tab via Setup Host Networks.

Depending on the capabilities of your network equipment and your approach to the architecture, several options are possible. It is best to connect to a stack of top-of-rack switches so that network availability is not interrupted if one of them fails. Consider the example of an aggregated LACP channel: to set it up, drag the second, unused adapter onto the first with the mouse. The Create New Bond window will open, where LACP (Mode 4, Dynamic link aggregation, 802.3ad) is selected by default. On the switch side, the usual LACP group configuration is performed. If it is not possible to build a switch stack, you can use Active-Backup mode (Mode 1). We will cover VLAN settings in the next article; for detailed network setup recommendations see the Planning and Prerequisites Guide.
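
After the bond is created, LACP negotiation can be verified on the host (assuming the bond was named bond0):

$ cat /proc/net/bonding/bond0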

FC setup

Fibre Channel (FC) is supported out of the box and is easy to use. We will not cover setting up the storage network itself, including configuring the storage systems and zoning the fabric switches, as part of the oVirt setup.

FCoE setup

FCoE, in my opinion, has not become widespread in storage networks, but is often used on servers as a "last mile", for example, in HPE Virtual Connect.

Setting up FCoE requires additional simple steps.

FCoE setup on the engine

The procedure is described in the article on the Red Hat website: B.3. How to Set Up Red Hat Virtualization Manager to Use FCoE.

On the manager, add the key with the following command and restart the engine:

$ sudo engine-config -s UserDefinedNetworkCustomProperties='fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$'
$ sudo systemctl restart ovirt-engine.service
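
To verify that the key was accepted, you can read it back:

$ sudo engine-config -g UserDefinedNetworkCustomProperties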

FCoE setup on the hosts

On the oVirt hosts, install the FCoE hook:

$ sudo yum install vdsm-hook-fcoe

Then comes the usual FCoE setup; see the Red Hat article 25.5. Configuring a Fibre Channel over Ethernet Interface.

For Broadcom CNAs, additionally see the User Guide: FCoE Configuration for Broadcom-Based Adapters.

Make sure the required packages are installed (they are already included in the minimal installation):

$ sudo yum install fcoe-utils lldpad

Next, the setup itself (substitute the names of the CNAs connected to the storage network for ens3f2 and ens3f3):

$ sudo cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-ens3f2
$ sudo cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-ens3f3
$ sudo vim /etc/fcoe/cfg-ens3f2
$ sudo vim /etc/fcoe/cfg-ens3f3

Important: if the network adapter hardware supports DCB/DCBX, the DCB_REQUIRED parameter must be set to no; here it is simply commented out:

DCB_REQUIRED="yes" → #DCB_REQUIRED="yes"
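
After editing, the relevant part of the file might look like this (a sketch; the exact set of parameters in the cfg-ethx template depends on the fcoe-utils version):

FCOE_ENABLE="yes"
#DCB_REQUIRED="yes"
AUTO_VLAN="yes"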

Next, make sure that adminStatus is disabled on all interfaces, including those without FCoE enabled:

$ sudo lldptool set-lldp -i ens3f0 adminStatus=disabled
...
$ sudo lldptool set-lldp -i ens3f3 adminStatus=disabled

Now the lldpad service can be started and enabled:

$ sudo systemctl start lldpad
$ sudo systemctl enable lldpad

As mentioned earlier, if hardware DCB/DCBX is used, DCB_REQUIRED must be set to no and this step can be skipped:

$ sudo dcbtool sc ens3f2 dcb on
$ sudo dcbtool sc ens3f3 dcb on
$ sudo dcbtool sc ens3f2 app:fcoe e:1
$ sudo dcbtool sc ens3f3 app:fcoe e:1
$ sudo ip link set dev ens3f2 up
$ sudo ip link set dev ens3f3 up
$ sudo systemctl start fcoe
$ sudo systemctl enable fcoe

For the network interfaces, check that autostart is enabled:

$ sudo vim /etc/sysconfig/network-scripts/ifcfg-ens3f2
$ sudo vim /etc/sysconfig/network-scripts/ifcfg-ens3f3

ONBOOT=yes

View the configured FCoE interfaces; the command output must not be empty:

$ sudo fcoeadm -i

The subsequent FCoE setup is done as for normal FC.

This is followed by setting up the storage systems and the SAN (zoning, SAN hosts, creating and presenting volumes/LUNs), after which the storage can be connected to the oVirt hosts: Storage → Domains → New Domain.

Leave Domain Function as Data, set Storage Type to Fibre Channel, Host to any, and the name to, for example, storNN-volMM.

Your storage system most likely supports not only redundant paths but also load balancing; many modern systems can transmit data along all paths equally optimally (ALUA active/active).

To bring all paths into the active state, you need to set up multipathing; more on that in the following articles.
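
As a quick preview, the multipath topology can be inspected on a host with the command below; note that on oVirt hosts vdsm manages /etc/multipath.conf, so manual edits there require care:

$ sudo multipath -ll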

NFS and iSCSI setup is done in a similar way.

Storage of ISO images

To install operating systems you will need their installation files, most often available as ISO images. You can use the built-in path, but oVirt has a dedicated storage domain type for working with images, ISO, which can point to an NFS server. Let's add it:

Storage → Domains → New Domain,
Domain Function → ISO,
Export Path - for example, mynfs01.example.com:/exports/ovirt-iso (the folder must be empty at connection time, and the manager must be able to write to it),
Name - e.g. mynfs01-iso.

To store images, the manager will create a structure
/exports/ovirt-iso/<some UUID>/images/11111111-1111-1111-1111-111111111111/

If there are already ISO images on the NFS server, to save space it is convenient to link them into this folder instead of copying the files.
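
For example, with a hypothetical image name, and assuming both directories are on the same filesystem (a hard link cannot cross filesystems):

$ sudo ln /exports/iso/CentOS-7-x86_64-Minimal-1810.iso \
    '/exports/ovirt-iso/<some UUID>/images/11111111-1111-1111-1111-111111111111/'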

First VM

At this stage, you can already create the first virtual machine, install the OS and application software on it.

Compute → Virtual Machines → New

For the new machine, specify a name (Name), create a disk (Instance Images → Create) and connect a network interface (Instantiate VM network interfaces by picking a vNIC profile → select ovirtmgmt, so far the only profile in the list).

On the client side, you need a modern browser and a SPICE client to interact with the console.

The first machine has been successfully launched. However, for a more complete operation of the system, a number of additional settings are required, which we will continue in the following articles.

Source: habr.com
