oVirt in 2 hours. Part 1: Open Fault Tolerant Virtualization Platform

Introduction

The open source oVirt project is a free, enterprise-grade virtualization platform. Scrolling through Habr, I found that oVirt is not covered as widely as it deserves.
oVirt is the upstream for the commercial Red Hat Virtualization (RHV, formerly RHEV) product and grows under the wing of Red Hat. To avoid confusion: this is not the same as CentOS vs RHEL; the model is closer to Fedora vs RHEL.
Under the hood is KVM, and management is done through a web interface. It is based on the RHEL/CentOS 7 OS.
oVirt can be used both for "traditional" server virtualization and for desktop virtualization (VDI); unlike the VMware offerings, both can coexist in a single deployment.
The project is well documented, has long since reached production maturity, and is ready for heavy workloads.
This article is the first in a series on how to build a working failover cluster. After going through the series, in a short time (about 2 hours) we will get a fully working system. Of course, a number of topics cannot be covered here; I will try to address them in the following articles.
We have been using oVirt for several years, starting with version 4.1. Our production system now runs on HPE Synergy 480 Gen10 and ProLiant BL460c Gen10 compute modules with Xeon Gold CPUs.
At the time of writing, the current version is 4.3.

Articles

  1. Introduction (We are here)
  2. Installing the manager (ovirt-engine) and hypervisors (hosts)
  3. Additional settings

Functional features

There are two main entities in oVirt: ovirt-engine and ovirt-host(s). For those familiar with VMware products: oVirt as a platform corresponds to vSphere, ovirt-engine (the management layer) performs the same functions as vCenter, and ovirt-host is the hypervisor, like ESX(i). Since vSphere is a very popular solution, I will occasionally compare oVirt with it.
Fig. 1 - oVirt control panel.
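
Everything the control panel in Fig. 1 does goes through the engine's REST API, for which a Python SDK (ovirtsdk4) exists. Below is a minimal sketch that connects to the engine and lists the managed hosts, purely to illustrate that ovirt-engine plays the vCenter-like role; the URL, credentials and CA file path are placeholders for this lab and will differ in your environment.

    # Minimal sketch: talk to ovirt-engine through the Python SDK (ovirtsdk4).
    # URL, user, password and CA file below are lab placeholders.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://ovirt.lab.example.com/ovirt-engine/api',
        username='admin@internal',
        password='changeme',
        ca_file='ca.pem',   # engine CA certificate, downloadable from the engine
    )

    hosts_service = connection.system_service().hosts_service()
    for host in hosts_service.list():
        print(host.name, host.status)

    connection.close()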

Most Linux distributions and Windows versions are supported as guest operating systems. For guests there are agents and optimized paravirtualized devices with virtio drivers, primarily the disk controller and the network interface.
To implement a fault-tolerant solution and all the interesting features, you will need shared storage. Both block (FC, FCoE, iSCSI) and file (NFS, etc.) storage are supported. For fault tolerance, the storage system itself must also be fault-tolerant (at least two controllers, multipathing).
Using local storage is possible, but only shared storage is suitable for a true cluster: local storage turns the system into a disparate set of hypervisors from which a cluster cannot be assembled, even if shared storage is also present. The most correct approach is diskless machines booting from SAN, or disks of minimal size. It is probably possible, via a vdsm hook, to build software-defined storage (for example, Ceph) on the local disks and present it to the VMs, but I have not seriously looked into it.

Architecture

Fig. 2 - oVirt architecture.
More information about the architecture can be found in the developer documentation.

Fig. 3 - oVirt objects.

The top element in the hierarchy is the Data Center. It determines whether shared or local storage is used, as well as the feature set (the compatibility level, from 4.1 to 4.3). There can be one or several of them; for many setups the default Data Center, named Default, is sufficient.
A Data Center consists of one or more clusters. A cluster defines the CPU type, migration policies, etc. For small installations you can likewise stick with the Default cluster.
A cluster, in turn, consists of hosts, which do the main work: they run the virtual machines and have the storage domains attached to them. A cluster assumes two or more hosts; although it is technically possible to build a cluster with a single host, this has no practical use.
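
The same Data Center → Cluster → Host hierarchy is visible through the API. Here is a hedged sketch (same placeholder engine and credentials as in the earlier example; the queries use the standard oVirt search syntax):

    # Walk the object hierarchy: Data Centers -> Clusters -> Hosts.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://ovirt.lab.example.com/ovirt-engine/api',
        username='admin@internal',
        password='changeme',
        ca_file='ca.pem',
    )
    system = connection.system_service()

    for dc in system.data_centers_service().list():
        print('Data Center:', dc.name)
        for cluster in system.clusters_service().list(search='datacenter=%s' % dc.name):
            print('  Cluster:', cluster.name)
            for host in system.hosts_service().list(search='cluster=%s' % cluster.name):
                print('    Host:', host.name)

    connection.close()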

oVirt supports many features, including live migration of virtual machines between hypervisors and between storage domains (storage migration), desktop virtualization (VDI) with VM pools, stateful and stateless VMs, NVIDIA GRID vGPU support, import from vSphere and KVM, a powerful API and much more. All of these features are available royalty-free, and, if needed, support can be purchased from Red Hat through regional partners.
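
As an example of what the API offers, here is a hedged sketch of triggering a live migration of a VM to a specific host via the SDK; the VM name myvm is hypothetical and the target host is one of the lab nodes used later in this series.

    # Live-migrate a VM to a chosen host via the SDK (names are placeholders).
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://ovirt.lab.example.com/ovirt-engine/api',
        username='admin@internal',
        password='changeme',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]   # hypothetical VM name
    vm_service = vms_service.vm_service(vm.id)

    # Omit the host argument to let the scheduler pick the target automatically.
    vm_service.migrate(host=types.Host(name='kvm02.lab.example.com'))

    connection.close()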

About RHV prices

The cost is low compared to VMware: only support is purchased, without any requirement to buy the license itself. Support is purchased only for the hypervisors; ovirt-engine, unlike vCenter Server, requires no spending.

Example calculation for the first year of ownership

Consider a cluster of four 2-socket machines at retail prices (no project discounts).
An RHV Standard subscription costs $999 per socket per year (Premium 24/7/365 - $1,499), for a total of 4 × 2 × $999 = $7,992.
vSphere prices:

  • VMware vCenter Server Standard $10,837.13 per instance plus Basic subscription $2,625.41 (Production $3,125.39);
  • VMware vSphere Standard $1,164.15 + Basic Subscription $552.61 (Production $653.82);
  • VMware vSphere Enterprise Plus $6,309.23 + Basic Subscription $1,261.09 (Production $1,499.94).

Total: $10,837.13 + $2,625.41 + 4 × 2 × ($1,164.15 + $552.61) = $27,196.62 for the smallest option. The difference is about 3.5 times!
In oVirt, all functions are available without restrictions.
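
For reference, the arithmetic above in a few lines of Python (list prices only; any discounts or different subscription levels would change the outcome):

    # Rough first-year cost comparison for four 2-socket hosts, list prices in USD.
    sockets = 4 * 2

    rhv_total = sockets * 999                 # RHV Standard, per socket per year
    vcenter = 10_837.13 + 2_625.41            # vCenter Server Standard + Basic sub
    vsphere = sockets * (1_164.15 + 552.61)   # vSphere Standard + Basic sub, per socket

    vmware_total = vcenter + vsphere
    print(rhv_total, round(vmware_total, 2), round(vmware_total / rhv_total, 1))
    # -> 7992 27196.62 3.4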

Brief characteristics and maximums

System Requirements

The hypervisor requires a CPU with hardware virtualization support enabled; the minimum RAM to start is 2 GiB, and the recommended storage for the OS is 55 GiB (mostly for logs, etc. - the OS itself takes little space).
More details can be found in the documentation.
For the Engine, the minimum requirements are 2 cores / 4 GiB RAM / 25 GiB storage; recommended is 4 cores / 16 GiB RAM / 50 GiB storage or more.
As with any system, there are limits on sizes and counts, most of which exceed the capabilities of commonly available commercial servers. For example, a pair of Intel Xeon Gold 6230 CPUs can address 2 TiB of RAM and provides 40 cores (80 threads), which is still below the limits of a single VM.

Virtual Machine Maximums:

  • Maximum concurrently running virtual machines: Unlimited;
  • Maximum virtual CPUs per virtual machine: 384;
  • Maximum memory per virtual machine: 4 TiB;
  • Maximum single disk size per virtual machine: 8 TiB.

Host Maximums:

  • Logical CPU cores or threads: 768;
  • RAM: 12 TiB
  • Number of hosted virtual machines: 250;
  • Simultaneous live migrations: 2 incoming, 2 outgoing;
  • Live migration bandwidth: defaults to 52 MiB/s (~436 Mbit/s) per migration when using the legacy migration policy. Other policies use adaptive values based on the speed of the physical link. QoS policies can further limit migration bandwidth.

Manager Logical Entity Maximums:

In 4.3 there are the following limits.

  • Data center
    • Maximum data center count: 400;
    • Maximum host count: 400 supported, 500 tested;
    • Maximum VM count: 4000 supported, 5000 tested;
  • Cluster
    • Maximum cluster count: 400;
    • Maximum host count: 400 supported, 500 tested;
    • Maximum VM count: 4000 supported, 5000 tested;
  • Network
    • Logical networks/cluster: 300
    • SDN/external networks: 2600 tested, no enforced limit;
  • Storage
    • Maximum domains: 50 supported, 70 tested;
    • Hosts per domain: No limit;
    • Logical volumes per block domain: 1500;
    • Maximum number of LUNs: 300;
    • Maximum disk size: 500 TiB (limited to 8 TiB by default).

Implementation options

As already mentioned, oVirt is built from two basic elements: ovirt-engine (management) and ovirt-host (hypervisor).
The Engine can run either outside the platform itself (standalone Manager: a VM on another platform or a dedicated hypervisor, or even a physical machine) or on the platform itself (Self-hosted Engine, similar to VMware's VCSA approach).
The hypervisor can be installed on a regular RHEL/CentOS 7 OS (EL Host) or on a specialized minimal OS (oVirt Node, based on EL7).
The hardware requirements for all variants are approximately the same.
Fig. 4 - standalone Manager architecture.

Fig. 5 - Self-hosted Engine architecture.
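
For completeness, attaching an EL Host to the engine can also be done through the SDK rather than the web UI. A hedged sketch follows; the host name, address, root password and cluster are placeholders, and the host is expected to already run CentOS 7 with root SSH access.

    # Register an existing CentOS 7 machine as an oVirt host (EL Host).
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://ovirt.lab.example.com/ovirt-engine/api',
        username='admin@internal',
        password='changeme',
        ca_file='ca.pem',
    )

    hosts_service = connection.system_service().hosts_service()
    hosts_service.add(
        types.Host(
            name='kvm01',
            address='kvm01.lab.example.com',
            root_password='changeme',               # root password of the new host
            cluster=types.Cluster(name='Default'),  # target cluster
        )
    )

    connection.close()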

For myself, I chose the standalone Manager and EL Hosts option:

  • standalone Manager is a little easier when there are startup problems: there is no chicken-and-egg dilemma (as with VCSA, which won't start until at least one host is fully up), but there is a dependency on another system*;
  • EL Host provides the full power of the OS, which is useful for external monitoring, debugging, troubleshooting, and more.

* However, this was never an issue during the entire period of operation, even after a serious power failure.
But let's get down to business!
For experimentation, I was able to free up a pair of ProLiant BL460c G7 blades with Xeon® CPUs; we will reproduce the installation process on them.
Let's name the nodes ovirt.lab.example.com, kvm01.lab.example.com and kvm02.lab.example.com.
Let's go directly to installation.

Source: habr.com
