Release of Proxmox VE 7.2, a distribution for deploying and managing virtual servers

Proxmox Virtual Environment 7.2 has been released. It is a specialized Linux distribution based on Debian GNU/Linux, aimed at deploying and maintaining virtual servers using LXC and KVM, and capable of acting as a replacement for products such as VMware vSphere, Microsoft Hyper-V, and Citrix Hypervisor. The installation ISO image is 994 MB.

Proxmox VE provides the means to deploy a turnkey, industrial-grade virtual server system, managed through a web interface and scaling to hundreds or even thousands of virtual machines. The distribution ships with built-in tools for backing up virtual environments and with out-of-the-box clustering support, including the ability to migrate virtual environments from one node to another without interrupting their operation. The web interface offers a secure VNC console, role-based access control for all available objects (VMs, storage, nodes, etc.), and support for various authentication mechanisms (MS ADS, LDAP, Linux PAM, Proxmox VE authentication).

In the new release:

  • The package base has been synchronized with Debian 11.3. The distribution has moved to the Linux 5.15 kernel and updated QEMU to 6.2, LXC to 4.0, Ceph to 16.2.7, and OpenZFS to 2.1.4.
  • Added support for the VirGL driver, which implements a virtual GPU on top of the OpenGL API and provides 3D rendering to the guest system without requiring exclusive direct access to a physical GPU. The SPICE remote access protocol is supported by default for VirtIO and VirGL virtual GPUs.
  • Added support for defining annotation (notes) templates for backup jobs, in which substitutions such as the virtual machine name ({{guestname}}) or the cluster name ({{cluster}}) can be used to make backups easier to find and tell apart (a minimal sketch of this substitution appears after this list).
  • Erasure coding support has been added to Ceph, which makes it possible to reconstruct lost data blocks.
  • Updated LXC container templates. Added new templates for Ubuntu 22.04, Devuan 4.0 and Alpine 3.15.
  • In the ISO image, the memtest86+ memory testing utility has been replaced with the completely rewritten version 6.0b, which supports UEFI and modern memory types such as DDR5.
  • Improvements have been made to the web interface: the backup settings section has been redesigned; keys for connecting to an external Ceph cluster can now be added via the GUI; and a virtual machine disk or container volume can now be reassigned to another guest on the same host.
  • In a cluster, the desired range for the identifiers (VMIDs) assigned to new virtual machines and containers can now be configured via the web interface.
  • To simplify rewriting parts of Proxmox VE and Proxmox Mail Gateway in Rust, a package with the perlmod crate has been included, which allows Rust modules to be exported as Perl packages. In Proxmox, perlmod is used to pass data between Rust and Perl code (see the example after this list).
  • The event scheduling (next-event) code has been unified with Proxmox Backup Server and migrated to the perlmod (Perl-to-Rust) binding. In addition to days of the week, times, and time ranges, it now supports binding to specific dates and times (*-12-31 23:50), date ranges (Sat *-1..7 15:00), and repeating ranges (Sat *-1..7 */30).
  • When restoring from a backup, some basic settings, such as the guest system name or the memory configuration, can now be overridden.
  • A new job-init hook has been added to the backup process, which can be used to carry out preparatory work.
  • The scheduler of the HA local resource manager (pve-ha-lrm), which runs the handlers, has been improved, increasing the number of user services that can be processed on a single node.
  • The HA Cluster Simulator implements the skip-round command to simplify testing for race conditions.
  • Added "proxmox-boot-tool kernel pin" command to pre-select the kernel version for the next boot, without having to select an item in the boot menu at boot time.
  • When installing on ZFS, the installation image now allows choosing among various compression algorithms (zstd, gzip, etc.).
  • Added a dark theme and an inline console to the Proxmox VE Android app.
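
As an illustration of the backup annotation templates mentioned above, here is a minimal sketch in Rust of how {{placeholder}} substitution of this kind could work. It is not Proxmox's actual implementation, and the variable values (web-01, prod-cluster) are hypothetical:

    use std::collections::HashMap;

    // Expand {{placeholder}} tokens in an annotation template string.
    // This mirrors the behavior described above, not Proxmox's own code.
    fn render_template(template: &str, vars: &HashMap<&str, &str>) -> String {
        let mut out = template.to_string();
        for (key, value) in vars {
            out = out.replace(&format!("{{{{{}}}}}", key), value);
        }
        out
    }

    fn main() {
        let mut vars = HashMap::new();
        vars.insert("guestname", "web-01");      // hypothetical VM name
        vars.insert("cluster", "prod-cluster");  // hypothetical cluster name
        let template = "Backup of {{guestname}} on cluster {{cluster}}";
        println!("{}", render_template(template, &vars));
        // Prints: Backup of web-01 on cluster prod-cluster
    }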
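
The perlmod binding mentioned above works roughly as follows. This is a minimal sketch based on perlmod's documented usage, not code from Proxmox itself; the package name RSPM::Foo, the library name rspm, and the exported function are placeholders:

    // Assumes Cargo dependencies on the perlmod and anyhow crates.

    // Export this Rust module as the Perl package RSPM::Foo,
    // built into a shared library named "rspm".
    #[perlmod::package(name = "RSPM::Foo", lib = "rspm")]
    mod export {
        use anyhow::Error;

        // Exported functions become callable from Perl; arguments and
        // return values are converted between the two languages.
        #[export]
        fn sum(a: u32, b: u32) -> Result<u32, Error> {
            Ok(a + b)
        }
    }

On the Perl side, after loading the generated bindings, the function would be callable as RSPM::Foo::sum(1, 2), with perlmod handling the data transfer between Rust and Perl.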

Source: opennet.ru
