Creation of fault-tolerant IT infrastructure. Part 2. Installing and configuring an oVirt 4.3 cluster

This article is a continuation of the previous one, “Creation of fault-tolerant IT infrastructure. Part 1 - Preparing to Deploy an oVirt 4.3 Cluster”.

It will cover the process of basic installation and configuration of an oVirt 4.3 cluster for hosting highly available virtual machines, taking into account the fact that all preliminary steps for preparing the infrastructure have already been completed previously.

Introduction

The main purpose of the article is not to provide step-by-step “Next -> Yes -> Finish” instructions, but to show some features of installing and configuring oVirt. The process of deploying your cluster may not always coincide with the one described here, due to the specifics of your infrastructure and environment, but the general principles will be the same.

Subjectively, oVirt 4.3 is similar in functionality to VMware vSphere 5.x, but of course with its own configuration and operation features.

For those interested, all the differences between RHEV (aka oVirt) and VMware vSphere can be found on the Internet, for example here, but I will still occasionally note some of their differences or similarities as the article progresses.

Separately, I would like to briefly compare how networking for virtual machines (hereinafter referred to as VMs) is handled. oVirt implements a network management principle similar to that of VMware vSphere:

  • using a standard Linux bridge (in VMware - Standard vSwitch), running on virtualization hosts;
  • using Open vSwitch (OVS) (in VMware - Distributed vSwitch): a distributed virtual switch consisting of two main components, a central OVN server and OVN controllers on the managed hosts.

It should be noted that, for ease of implementation, the article describes setting up networks for VMs in oVirt using a standard Linux bridge, which is the default choice when using the KVM hypervisor.

In this regard, there are several basic rules for working with the network in a cluster, which are best not to be violated:

  • All network settings on hosts before adding them to oVirt must be identical, except for IP addresses.
  • Once a host has been taken under the control of oVirt, it is highly not recommended to change anything manually in the network settings without complete confidence in your actions, since the oVirt agent will simply roll them back to the previous ones after restarting the host or agent.
  • Adding a new network for a VM, as well as working with it, should only be done from the oVirt management console.

Another important note: for a highly critical environment (one very sensitive to monetary losses), it is still recommended to use paid support, i.e. Red Hat Virtualization 4.3. During the operation of an oVirt cluster, issues may arise for which it is advisable to get qualified help as quickly as possible rather than deal with them yourself.

Finally, before deploying an oVirt cluster, it is recommended to familiarize yourself with the official documentation in order to be aware of at least the basic concepts and definitions; otherwise the rest of the article will be a little difficult to read.

The guidance documents basic to understanding this article and the principles of operation of an oVirt cluster are not very long; in an hour or two you can master the basic principles. For those who like details, it is recommended to read the Product Documentation for Red Hat Virtualization 4.3 - RHEV and oVirt are essentially the same thing.

So, if all the basic settings on the hosts, switches and storage systems have been completed, we proceed directly to the deployment of oVirt.

Part 2. Installing and configuring the oVirt 4.3 cluster

For ease of orientation, I will list the main sections in this article, which must be completed one by one:

  1. Installing the oVirt management server
  2. Creation of a new data center
  3. Creating a new cluster
  4. Installing additional hosts in a Self-Hosted environment
  5. Creating a storage area or Storage Domains
  6. Creating and configuring networks for virtual machines
  7. Creating an installation image for deploying a virtual machine
  8. Create a virtual machine

Installing the oVirt management server

The oVirt management server is the most important element of the oVirt infrastructure; it takes the form of a virtual machine, a host, or a virtual appliance, and it manages the entire oVirt infrastructure.

Its close analogues from the world of virtualization are:

  • VMware vSphere - vCenter Server
  • Microsoft Hyper-V - System Center Virtual Machine Manager (VMM).

To install the oVirt management server, we have two options:

Option 1
Deploying the management server as a dedicated VM or host.

This option works quite well, provided that such a VM operates independently of the cluster, i.e. it does not run on any cluster host as a regular KVM virtual machine.

Why can’t such a VM be deployed on cluster hosts?

At the very beginning of deploying the oVirt management server we face a dilemma: we need to install a management VM, but there is no cluster yet, so what can we come up with on the fly? That's right - install KVM on a future cluster node, create a virtual machine on it (for example, with CentOS) and deploy the oVirt engine inside it. This is usually done for the sake of full control over such a VM, but it is a mistaken intention, because in this case there will definitely be problems with such a management VM in the future:

  • it cannot be migrated in the oVirt console between hosts (nodes) of the cluster;
  • when migrating using KVM via virsh migrate, this VM will not be available for management from the oVirt console;
  • cluster hosts cannot be put into Maintenance mode if you migrate this VM from host to host using virsh migrate.

So do everything according to the rules: use either a dedicated host for the oVirt management server, or an independent VM running on such a host, or, better yet, do as described in the second option.

Option 2
Installing oVirt Engine Appliance on a cluster host managed by it.

It is this option that will be considered further, as the more correct and suitable one in our case.
The requirements for such a VM are described below; I will only add that it is recommended to have at least two hosts in the infrastructure on which the management VM can run, in order to make it fault-tolerant. Here I would also add that, as I already wrote in the comments to the previous article, I was never able to provoke split-brain on an oVirt cluster of two hosts that are both able to run the hosted-engine VM.

Installing oVirt Engine Appliance on the first host of the cluster

Link to the official documentation - oVirt Self-Hosted Engine Guide, chapter “Deploying the Self-Hosted Engine Using the Command Line”.

The document lists the prerequisites that must be met before deploying the hosted-engine VM and describes the installation process itself in detail, so there is little point in repeating it verbatim; instead we will focus on some important details.

  • Before starting all actions, be sure to enable virtualization support in the BIOS settings on the host.
  • Install the package for the hosted-engine installer on the host:

yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm 
yum -y install epel-release
yum install screen ovirt-hosted-engine-setup
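
As a quick sanity check that hardware virtualization (see the first step above) is actually exposed to the OS, the following commands can be used; this is not part of the official procedure, just a verification:

# should print a non-zero count if VT-x/AMD-V is enabled in the BIOS
grep -cE 'vmx|svm' /proc/cpuinfo
lscpu | grep -i virtualization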

  • We start the oVirt Hosted Engine deployment procedure inside screen on the host (detach with Ctrl-A + D, terminate with Ctrl-D):

screen
hosted-engine --deploy

If you wish, you can run the installation with a pre-prepared answer file:

hosted-engine --deploy --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-ohe.conf

  • When deploying hosted-engine, we specify all the necessary parameters:

- the cluster name
- the number of vCPUs and the amount of vRAM (4 vCPU and 16 GB recommended)
- passwords
- the storage type for the hosted engine VM - in our case FC
- the LUN number for installing the hosted engine
- where the database for the hosted engine will be located - for simplicity I recommend choosing Local (a PostgreSQL database running inside this VM)
- and other parameters.

  • To install the highly available hosted engine VM, we previously created a dedicated LUN on the storage system (LUN number 4, 150 GB in size), which was then presented to the cluster hosts - see the previous article.

Earlier we also verified that it is visible on the hosts:

multipath -ll
…
3600a098000e4b4b3000003c95d171065 dm-3 DELL    , MD38xxf
size=150G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=14 status=active
| `- 15:0:0:4  sdc 8:32  active ready running
`-+- policy='service-time 0' prio=9 status=enabled
  `- 18:0:0:4  sdj 8:144 active ready running

  • The hosted-engine deployment process itself is not complicated; at the end we should receive something like this:

[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20191129131846.conf'
[ INFO  ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Hosted Engine successfully deployed

We check the presence of oVirt services on the host:
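
A minimal way to do this from the shell (the exact set of units may differ slightly between oVirt releases; these are the typical ones on a hosted-engine host):

systemctl status ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd
systemctl is-active ovirt-ha-agent ovirt-ha-broker vdsmd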


If everything was done correctly, then after the installation is complete, use a web browser to go to https://ovirt_hostname/ovirt-engine from the administrator's computer, and click [Administration Portal].

Screenshot of “Administration Portal”


By entering the login and password (set during the installation process) into the window as in the screenshot, we get to the Open Virtualization Manager control panel, in which you can perform all actions with the virtual infrastructure:

  1. add data center
  2. add and configure a cluster
  3. add and manage hosts
  4. add storage areas or Storage Domains for virtual machine disks
  5. add and configure networks for virtual machines
  6. add and manage virtual machines, installation images, VM templates


All these actions will be discussed below, some in broad strokes, others in more detail and with nuances.
But first I would recommend reading the addendum below, which will probably be useful to many.

Addendum

1) In principle, if there is such a need, nothing prevents you from installing the KVM hypervisor on the cluster nodes in advance using the libvirt and qemu-kvm (or qemu-kvm-ev) packages of the desired version, although when an oVirt cluster node is deployed, it can do this itself.

But if the installed libvirt and qemu-kvm are not of a recent enough version, you may receive the following error when deploying the hosted engine:

error: unsupported configuration: unknown CPU feature: md-clear

That is, you need an updated version of libvirt with MDS protection that supports this policy:

<feature policy='require' name='md-clear'/>

Install libvirt v.4.5.0-10.el7_6.12, with md-clear support:

yum-config-manager --disable mirror.centos.org_centos-7_7_virt_x86_64_libvirt-latest_

yum install centos-release-qemu-ev
yum update
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer libguestfs libguestfs-tools dejavu-lgc-sans-fonts virt-top libvirt libvirt-python libvirt-client

systemctl enable libvirtd
systemctl restart libvirtd && systemctl status libvirtd

Check for 'md-clear' support:

virsh domcapabilities kvm | grep require
      <feature policy='require' name='ss'/>
      <feature policy='require' name='hypervisor'/>
      <feature policy='require' name='tsc_adjust'/>
      <feature policy='require' name='clflushopt'/>
      <feature policy='require' name='pku'/>
      <feature policy='require' name='md-clear'/>
      <feature policy='require' name='stibp'/>
      <feature policy='require' name='ssbd'/>
      <feature policy='require' name='invtsc'/>

After this, you can continue installing the hosted engine.

2) In oVirt 4.3, the presence and use of the firewalld firewall is a mandatory requirement.

If during deployment of a VM for hosted-engine we receive the following error:

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "firewalld is required to be enabled and active in order to correctly deploy hosted-engine. Please check, fix accordingly and re-deploy.n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
https://bugzilla.redhat.com/show_bug.cgi?id=1608467

then you need to disable any other firewall (if one is used), and then install and start firewalld:

yum install firewalld
systemctl enable firewalld
systemctl start firewalld

firewall-cmd --state
firewall-cmd --get-default-zone
firewall-cmd --get-active-zones
firewall-cmd --get-zones

Later, when the oVirt agent is installed on a new cluster host, it will configure the required ports in firewalld automatically.
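
To see what exactly was opened on a host, the standard firewalld queries can be used (just a verification step, not required by the procedure):

firewall-cmd --list-services
firewall-cmd --list-ports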

3) Rebooting a host on which the hosted engine VM is running.

As usual, link 1 and link 2 to the guidance documents.

All management of the hosted engine VM is done ONLY with the hosted-engine command on the host where it is running; forget about virsh, and also about connecting to this VM via SSH and running “shutdown”.

Procedure for putting a VM into maintenance mode:

hosted-engine --set-maintenance --mode=global

hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host host1.test.local (id: 1) status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : host1.test.local
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "Up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : dee1a774
local_conf_timestamp               : 1821
Host timestamp                     : 1821
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=1821 (Sat Nov 29 14:25:19 2019)
        host-id=1
        score=3400
        vm_conf_refresh_time=1821 (Sat Nov 29 14:25:19 2019)
        conf_on_shared_storage=True
        maintenance=False
        state=GlobalMaintenance
        stopped=False

hosted-engine --vm-shutdown

We reboot the host running the hosted engine agent and do whatever we need with it.

After the reboot, check the status of the VM with the hosted engine:

hosted-engine --vm-status

If our hosted-engine VM does not start and we see errors like the following in the service log:

Error in the service log:

journalctl -u ovirt-ha-agent
...
Jun 29 14:34:44 host1 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors
Jun 29 14:34:44 host1 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):#012  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent#012    return action(he)#012  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper#012    return he.start_monitoring()#012  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 413, in start_monitoring#012    self._initialize_broker()#012  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 537, in _initialize_broker#012    m.get('options', {}))#012  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 86, in start_monitor#012    ).format(t=type, o=options, e=e)#012RequestError: brokerlink - failed to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'ping', options: {'addr': '172.20.32.32'}]
Jun 29 14:34:44 host1 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent

Then we connect the storage and restart the agent:

hosted-engine --connect-storage
systemctl restart ovirt-ha-agent
systemctl status ovirt-ha-agent

hosted-engine --vm-start
hosted-engine --vm-status

After starting the VM with hosted-engine, we take it out of maintenance mode:

Procedure for removing a VM from maintenance mode:

hosted-engine --check-liveliness
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status

--== Host host1.test.local (id: 1) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : host1.test.local
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "Up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 6d1eb25f
local_conf_timestamp               : 6222296
Host timestamp                     : 6222296
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=6222296 (Fri Jan 17 11:40:43 2020)
        host-id=1
        score=3400
        vm_conf_refresh_time=6222296 (Fri Jan 17 11:40:43 2020)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineUp
        stopped=False

4) Removing the hosted engine and everything associated with it.

Sometimes it is necessary to properly remove a previously installed hosted engine - link to the guidance document.

Just run the command on the host:

/usr/sbin/ovirt-hosted-engine-cleanup

Next, we remove unnecessary packages, backing up some configs before this, if necessary:

yum autoremove ovirt* qemu* virt* libvirt* libguestfs 

Creation of a new data center

Reference documentation - oVirt Administration Guide. Chapter 4: Data Centers

First, let's define what a data center is (quoting the documentation): it is a logical entity that defines the set of resources used in a specific environment.

A data center is a kind of container consisting of:

  • logical resources in the form of clusters and hosts;
  • cluster network resources in the form of logical networks and physical adapters on the hosts;
  • storage resources (for VM disks, templates, images) in the form of storage areas (Storage Domains).

A data center can include multiple clusters consisting of multiple hosts with virtual machines running on them, and it can also have multiple storage areas associated with it.
There can be several data centers; they operate independently of each other. oVirt has role-based separation of privileges, and permissions can be configured individually, both at the data center level and for its individual logical elements.

The data center, or data centers if there are several of them, are managed from a single administrative console or portal.

To create a data center, go to the administrative portal and create a new data center:
Compute >> Data Centers >> New

Since we use shared storage on the storage system, the Storage Type should be Shared:

Screenshot of the Data Center Creation Wizard

Creation of fault-tolerant IT infrastructure. Part 2. Installing and configuring an oVirt 4.3 cluster

When the hosted-engine virtual machine is installed, a data center named Datacenter1 is created by default, and later, if necessary, you can change its Storage Type to another.

Creating a data center is a simple task without any tricky nuances, and all additional actions with it are described in the documentation. The only thing I will note is that standalone hosts that have only local storage (disk) for VMs cannot be added to a data center with Storage Type = Shared; for them you need to create a separate data center, i.e. each individual host with local storage needs its own data center.

Creating a new cluster

Link to documentation - oVirt Administration Guide. Chapter 5: Clusters

Without going into unnecessary detail, a cluster is a logical grouping of hosts that share a common storage area (shared disks on a storage system, as in our case). It is also desirable that the hosts in the cluster be identical in hardware and have the same type of processor (Intel or AMD). It is best, of course, if the servers in the cluster are completely identical.

A cluster is part of a data center (with a specific storage type, Local or Shared), and every host must belong to some cluster, depending on whether it has shared storage or not.

When the hosted-engine virtual machine is installed on a host, a data center (Datacenter1) is created by default together with a cluster (cluster1), and later you can configure its parameters, enable additional options, add hosts to it, and so on.

As usual, for details about all cluster settings it is advisable to refer to the official documentation. Of the cluster setup features I will only add that when creating one, it is enough to configure only the basic parameters on the General tab.

I will note the most important parameters:

  • Processor Type – selected based on which processors are installed on the cluster hosts, which vendor they are from, and which host processor is the oldest, so that the cluster exposes only the processor instructions available on all of its hosts.
  • Switch type – in our cluster we use only the Linux bridge, so that is what we choose.
  • Firewall type – everything is clear here: it is firewalld, which must be enabled and configured on the hosts.

Screenshot with cluster parameters


Installing additional hosts in a Self-Hosted environment

Link for documentation.

Additional hosts for a Self-Hosted environment are added in the same way as a regular host, with the additional step of deploying the hosted engine VM: Choose hosted engine deployment action >> Deploy. Since the additional host must also be presented with the LUN for the hosted engine VM, this means that this host can, if necessary, be used to run the hosted engine VM.
For fault tolerance purposes, it is highly recommended to have at least two hosts on which the hosted engine VM can be placed.

On the additional host, disable iptables (if enabled) and enable firewalld:

systemctl stop iptables
systemctl disable iptables

systemctl enable firewalld
systemctl start firewalld

Install the required KVM version (if necessary):

yum-config-manager --disable mirror.centos.org_centos-7_7_virt_x86_64_libvirt-latest_

yum install centos-release-qemu-ev
yum update
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer libguestfs libguestfs-tools dejavu-lgc-sans-fonts virt-top libvirt libvirt-python libvirt-client

systemctl enable libvirtd
systemctl restart libvirtd && systemctl status libvirtd

virsh domcapabilities kvm | grep md-clear

Install the necessary repositories and the hosted engine installer:

yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum -y install epel-release
yum update
yum install screen ovirt-hosted-engine-setup

Next, go to the Open Virtualization Manager console, add the new host, and do everything step by step as written in the documentation.

As a result, after adding the additional host, the administrative console should look something like the screenshot below.

Screenshot of the administrative portal - hosts


The host on which the hosted-engine VM is currently active is marked with a gold crown and the label “Running the Hosted Engine VM”; a host on which this VM can be launched if necessary is labeled “Can run the Hosted Engine VM”.

If the host running the hosted engine VM fails, the VM will automatically be restarted on the second host. It can also be migrated from the active host to the standby host for host maintenance.

Setting up Power Management / fencing on oVirt hosts

Documentation links:

Although it may seem that adding and configuring a host is now finished, that is not entirely true.
For normal operation of the hosts, and to detect and resolve failures on any of them, Power Management / fencing must be configured.

Fencing is the process of temporarily excluding a faulty or failed host from the cluster, during which either the oVirt services on it or the host itself are restarted.

All details on the definitions and parameters of Power Management / fencing are given, as usual, in the documentation; I will only give an example of how to configure this important parameter, as applied to Dell R640 servers with iDRAC 9.

  1. Go to the administrative portal, click Compute >> Hosts, and select a host.
  2. Click Edit.
  3. Click the Power Management tab.
  4. Check the box next to the Enable Power Management option.
  5. Check the box next to the Kdump integration option, to prevent the host from going into fencing mode while a kernel crash dump is being recorded.

Note.

After enabling Kdump integration on an already running host, it must be reinstalled according to the procedure in oVirt Administration Guide -> Chapter 7: Hosts -> Reinstalling Hosts.

  6. Optionally, you can check the box Disable policy control of power management if we do not want host power management to be controlled by the cluster's Scheduling Policy.
  7. Click the (+) button to add a new power management device; the agent properties editing window will open.
    For iDRAC9, fill in the fields:

    • Address – iDRAC9 address
    • Username/Password – login and password for logging into iDRAC9, respectively
    • Type – drac5
    • check the Secure option
    • add the following options: cmd_prompt=>,login_timeout=30

Screenshot with “Power Management” parameters in host properties

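Before relying on fencing, it is also worth checking from another cluster host that the node's iDRAC actually responds to power management requests. A minimal check (assuming ipmitool is installed, and using the same iDRAC address and credentials as above; the angle-bracket values are placeholders):

# query power state over IPMI-over-LAN; "on" confirms the BMC is reachable and accepts the credentials
ipmitool -I lanplus -H <idrac_address> -U <idrac_user> -P <idrac_password> chassis power status

oVirt itself also provides a Test button on the Power Management tab that performs a similar check through the configured fence agent.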

Creating a storage area or Storage Domains

Link to documentation - oVirt Administration Guide, Chapter 8: Storage.

A Storage Domain, or storage area, is a centralized location for storing virtual machine disks, installation images, templates, and snapshots.

Storage areas can be connected to a data center using various protocols and cluster or network file systems.

oVirt has three types of storage area:

  • Data Domain – for storing all data associated with virtual machines (disks, templates). A Data Domain cannot be shared between different data centers.
  • ISO Domain (obsolete type of storage area) – for storing OS installation images. ISO Domain can be shared between different data centers.
  • Export Domain (obsolete type of storage area) – for temporary storage of images moved between data centers.

In our particular case, the Data Domain storage area uses the Fibre Channel Protocol (FCP) to connect to LUNs on the storage system.

From the point of view of oVirt, when a storage system is used (FC or iSCSI), each virtual disk, snapshot, or template is a logical disk.
Block devices are combined (on the cluster hosts) into a single Volume Group and then divided by LVM into logical volumes, which are used as virtual disks for the VMs.

All these volume groups and the many LVM volumes can be seen on a cluster host using the vgs and lvs commands. Naturally, all actions with such disks should be done only from the oVirt console, except in special cases.
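
For example (just an illustration of the commands mentioned above; on a real host the VG/LV names are UUIDs generated by oVirt, with one logical volume per virtual disk or snapshot):

vgs
lvs -o vg_name,lv_name,lv_size,lv_attr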

Virtual disks for VMs can be of two formats - QCOW2 or RAW. Disks can be “thin” or “thick” (preallocated). Snapshots are always created as thin.

The way storage domains accessed through FC are managed is quite logical: for each VM virtual disk there is a separate logical volume, which is writable by only one host at a time. For FC connections, oVirt uses something like clustered LVM.

Virtual machines located on the same storage area can be migrated between hosts belonging to the same cluster.

As we can see from the description, a cluster in oVirt, like a cluster in VMware vSphere or Hyper-V, essentially means the same thing - it is a logical grouping of hosts, preferably identical in hardware composition, and having common storage for virtual machine disks.

Let's proceed directly to creating a storage area for data (VM disks), since without it the data center will not be initialized.
Let me remind you that all LUNs presented to the cluster hosts on the storage system must be visible on them with the “multipath -ll” command.

According to the documentation, in the portal go to Storage >> Domains >> New Domain and follow the instructions from the “Adding FCP Storage” section.

After launching the wizard, fill in the required fields:

  • Name – set the storage domain name
  • Domain Function – Data
  • Storage Type – Fibre Channel
  • Host to Use – select a host on which the LUN we need is visible

In the list of LUNs, mark the one we need, click Add and then OK. If necessary, you can adjust additional parameters of the storage area by clicking on Advanced Parameters.

Screenshot of the wizard for adding “Storage domain”


Based on the results of the wizard, we should receive a new storage area, and our data center should move to the status UP, or initialized:

Screenshots of the data center and storage areas in it:



Creating and configuring networks for virtual machines

Link to documentation - oVirt Administration Guide, Chapter 6: Logical Networks

Networks serve to group the logical networks used in the oVirt virtual infrastructure.

For interaction between the network adapter of a virtual machine and the physical adapter of the host, logical interfaces such as a Linux bridge are used.

To group and divide traffic between networks, VLANs are configured on the switches.

When creating a logical network for virtual machines in oVirt, it must be assigned an identifier corresponding to the VLAN number on the switch so that the VMs can communicate with each other, even if they run on different nodes of the cluster.

The preliminary configuration of the network adapters on the hosts for connecting virtual machines had to be done in the previous article, where the logical interface bond1 was configured; after that, all network settings should be made only through the oVirt administrative portal.

After creating the hosted-engine VM, in addition to the automatic creation of the data center and cluster, a logical network for managing our cluster, ovirtmgmt, was also created automatically, and this VM was connected to it.

If necessary, you can view the settings of the ovirtmgmt logical network and adjust them, but you must be careful not to lose control of the oVirt infrastructure.

Logical network settings of ovirtmgmt


To create a new logical network for regular VMs, in the administrative portal go to Network >> Networks >> New, and on the General tab add a network with the desired VLAN ID and check the box next to “VM Network”; this means it can be used for assignment to VMs.

Screenshot of the new VLAN32 logical network


On the Cluster tab, we attach this network to our cluster1 cluster.

After this we go to Compute >> Hosts, open each host in turn, go to the Network Interfaces tab, and launch the Setup Host Networks wizard to bind the new logical network to the host.

Screenshot of the “Setup host networks” wizard


The oVirt agent will automatically make all the necessary network settings on the host - create a VLAN and BRIDGE.

Example configuration files for new networks on the host:

cat ifcfg-bond1
# Generated by VDSM version 4.30.17.1
DEVICE=bond1
BONDING_OPTS='mode=1 miimon=100'
MACADDR=00:50:56:82:57:52
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no

cat ifcfg-bond1.432
# Generated by VDSM version 4.30.17.1
DEVICE=bond1.432
VLAN=yes
BRIDGE=ovirtvm-vlan432
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no

cat ifcfg-ovirtvm-vlan432
# Generated by VDSM version 4.30.17.1
DEVICE=ovirtvm-vlan432
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no

Let me remind you once again that on a cluster host you DO NOT need to create the ifcfg-bond1.432 and ifcfg-ovirtvm-vlan432 network interface files manually in advance.
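
To verify from the host shell that the agent actually created the VLAN interface and the bridge (a quick check using the interface names from the example above):

ip link show bond1.432
ip link show ovirtvm-vlan432
bridge link show | grep ovirtvm-vlan432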

After adding the logical network and checking connectivity between the host and the hosted engine VM, it can be used in virtual machines.

Creating an installation image for deploying a virtual machine

Link to documentation - oVirt Administration Guide, Chapter 8: Storage, section Uploading Images to a Data Storage Domain.

Without an OS installation image it will not be possible to install a virtual machine, although this is of course not a problem if, for example, Cobbler with pre-created images is deployed on the network.

In our case, this is not possible, so you will have to import this image into oVirt yourself. Previously, this required creating an ISO Domain, but in the new version of oVirt it has been deprecated, and therefore you can now upload images directly to the Storage domain from the administrative portal.

In the administrative portal go to Storage >> Disks >> Upload >> Start.
We add our OS image as an ISO file, fill in all the fields in the form, and click the button "Test connection".

Screenshot of the Add Installation Image Wizard


If we get an error like this:

Unable to upload image to disk d6d8fd10-c1e0-4f2d-af15-90f8e636dadc due to a network error. Ensure that ovirt-imageio-proxy service is installed and configured and that ovirt-engine's CA certificate is registered as a trusted CA in the browser. The certificate can be fetched from https://ovirt.test.local/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA

Then you need to add the oVirt CA certificate to the Trusted Root Certification Authorities store on the administrator's workstation from which we are trying to upload the image.
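
The certificate itself can be fetched, for example, with curl, using the URL from the error message above (hostname as in our example), and then imported into the browser/OS trust store:

# -k is needed because the engine certificate is not yet trusted at this point
curl -k -o ovirt-ca.pem 'https://ovirt.test.local/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'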

After adding the certificate to the Trusted Root CAs, click “Test connection” again; you should get:

Connection to ovirt-imageio-proxy was successful.

After the certificate has been added, you can try uploading the ISO image to the Storage Domain again.

In principle, you can make a separate Storage Domain with the Data type to store images and templates separately from VM disks, or even store them in a Storage Domain for the hosted engine, but this is at the discretion of the administrator.

Screenshot with ISO images in Storage Domain for hosted engine


Create a virtual machine

Link to documentation:
oVirt Virtual Machine Management Guide –> Chapter 2: Installing Linux Virtual Machines
Console Clients Resources

After uploading the OS installation image into oVirt, you can proceed directly to creating a virtual machine. A lot of work has been done, but we are at the final stage, for the sake of which all of this was started: a fault-tolerant infrastructure for hosting highly available virtual machines. And all of it is absolutely free - not a single penny was spent on software licenses.

To create a virtual machine with CentOS 7, the OS installation image must already be uploaded.

We go to the administrative portal, go to Compute >> Virtual Machines, and launch the VM creation wizard. Fill in all the parameters and fields and click OK. Everything is very simple if you follow the documentation.

As an example, I will give the basic and additional settings of a highly available VM, with a created disk, connected to the network, and booting from an installation image:

Screenshots with highly available VM settings


After finishing work with the wizard, close it, launch a new VM and install the OS on it.
To do this, go to the console of this VM through the administrative portal:

Screenshot of the administrative portal settings for connecting to the VM console


To connect to the VM console, you must first configure the console in the properties of the virtual machine.

Screenshot of VM settings, “Console” tab


To connect to the VM console you can use, for example, Virtual Machine Viewer.

To connect to the VM console directly in the browser window, the console connection settings should be as follows:

Screenshot of the console connection settings

After installing the OS on the VM, it is advisable to install the oVirt guest agent:

yum -y install epel-release
yum install -y ovirt-guest-agent-common
systemctl enable ovirt-guest-agent.service && systemctl restart ovirt-guest-agent.service
systemctl status ovirt-guest-agent.service

Thus, as a result of our actions, the created VM will be highly available, i.e. if the cluster node on which it is running fails, oVirt will automatically restart it on the second node. This VM can also be migrated between cluster hosts for their maintenance or other purposes.

Conclusion

I hope that this article managed to convey that oVirt is a completely normal tool for managing virtual infrastructure, which is not so difficult to deploy - the main thing is to follow certain rules and requirements described both in the article and in the documentation.

Due to the large volume of the article, it was not possible to include many things in it, such as step-by-step walkthroughs of the various wizards with all the detailed explanations and screenshots, lengthy output of some commands, and so on. In fact, that would require writing an entire book, which does not make much sense given that new software versions with innovations and changes appear constantly. The most important thing is to understand the principle of how it all works together and to obtain a general algorithm for creating a fault-tolerant platform for managing virtual machines.

Although we have created the virtual infrastructure, we still need to teach its individual elements (hosts, virtual machines, internal networks) to interact both with each other and with the outside world.

This process is one of the main tasks of a system or network administrator, and it will be covered in the next article, about the use of VyOS virtual routers in the fault-tolerant infrastructure of our enterprise (as you may have guessed, they will run as virtual machines on our oVirt cluster).

Source: habr.com
