OpenZFS 2.1 release with dRAID support

The OpenZFS 2.1 release has been published by the project that develops the ZFS file system implementation for Linux and FreeBSD. The project was formerly known as "ZFS on Linux" and was originally limited to building a module for the Linux kernel, but after FreeBSD support was merged it was adopted as the main implementation of OpenZFS, and the mention of Linux was dropped from the name.

OpenZFS has been tested with Linux kernels 3.10 through 5.13 and with all FreeBSD branches starting from 12.2-RELEASE. OpenZFS is already used in FreeBSD and is included in the Debian, Ubuntu, Gentoo, Sabayon Linux, and ALT Linux distributions. Packages with the new version will soon be prepared for the major Linux distributions, including Debian, Ubuntu, Fedora, and RHEL/CentOS.

OpenZFS provides an implementation of the ZFS components covering both file system operation and volume management. In particular, the following components are implemented: SPA (Storage Pool Allocator), DMU (Data Management Unit), ZVOL (ZFS Emulated Volume), and ZPL (ZFS POSIX Layer). The project also makes it possible to use ZFS as a backend for the Lustre cluster file system. The project's work is based on the original ZFS code imported from the OpenSolaris project, extended with improvements and fixes from the Illumos community. The project is developed with the participation of employees of the Lawrence Livermore National Laboratory under a contract with the US Department of Energy.

The code is distributed under the free CDDL license, which is incompatible with the GPLv2. This prevents OpenZFS from being integrated into the mainline Linux kernel, since mixing code under the GPLv2 and CDDL licenses is not allowed. To work around this license incompatibility, it was decided to distribute the entire product under the CDDL as a loadable module, shipped separately from the kernel. The stability of the OpenZFS codebase is rated as comparable to that of other Linux file systems.

Major changes:

  • Added support for dRAID (Distributed Spare RAID), a variant of RAIDZ with integrated, distributed handling of hot-spare blocks. dRAID inherits all the advantages of RAIDZ while achieving a significant speed-up of storage rebuilding (resilvering) and restoration of redundancy in the array. A dRAID virtual device is formed from several internal RAIDZ groups, each containing devices for storing data and devices for storing parity blocks. These groups are distributed across all drives to make optimal use of the available disk bandwidth. Instead of a single dedicated hot-spare drive, dRAID uses the concept of logically distributing the hot-spare blocks across all drives in the array.
  • The "compatibility" property ("zpool create -o compatibility=off|legacy|file[,file…] pool vdev") has been implemented, allowing the administrator to select the set of features that may be activated in a pool, in order to create portable pools and maintain pool compatibility between different OpenZFS versions and different platforms.
  • Pool operation statistics can now be saved in the format of the InfluxDB DBMS, which is optimized for storing, analyzing, and manipulating time-series data (slices of parameter values at specified time intervals). The new "zpool influxdb" command performs the export.
  • Added support for hot-adding memory and CPUs.
  • New commands and options:
    • "zpool create -u" - disable automatic mounting.
    • "zpool history -i" - reflection in the history of operations of the duration of the execution of each command.
    • "zpool status" - Added a warning about disks with a non-optimal block size.
    • "zfs send --skip-missing|-s" - ignore missing snapshots when sending a stream for replication.
    • "zfs rename -u" - rename the file system without remounting.
    • "arcstat" - added L2ARC statistics and new "-a" (all) and "-p" (parsable) options.
  • Optimizations:
    • Improved interactive I/O performance.
    • Sped up prefetch for parallel data access workloads.
    • Improved scalability by reducing lock contention.
    • Reduced pool import time.
    • Reduced fragmentation of ZIL blocks.
    • Improved performance of recursive operations.
    • Improved memory management.
    • Accelerated loading of the kernel module.
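The new features above can be illustrated with a few commands. This is a sketch only: the pool name, dataset names, and device paths (/dev/sd…) are placeholders, and the dRAID layout and compatibility file name are example values the administrator would choose for their own hardware and target OpenZFS version.

```shell
#!/bin/sh
# Create a dRAID pool: draid<parity>[:<data>d][:<children>c][:<spares>s].
# Here: double parity, 4 data disks per group, 11 child drives,
# 1 distributed spare. Device names are placeholders.
zpool create tank draid2:4d:11c:1s \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk

# Create a portable pool restricted to a feature set from a
# compatibility file (example file name; shipped files live in
# /usr/share/zfs/compatibility.d).
zpool create -o compatibility=openzfs-2.0-linux portable raidz1 \
    /dev/sdl /dev/sdm /dev/sdn

# Create a pool without automatically mounting its root dataset.
zpool create -u scratch /dev/sdo

# Export pool statistics in InfluxDB line protocol.
zpool influxdb tank

# Show command history including per-command duration.
zpool history -i tank

# Replicate, skipping snapshots that disappeared since listing.
zfs send -R --skip-missing tank/data@backup | zfs receive backup/data

# Rename a file system without remounting it.
zfs rename -u tank/data tank/archive
```

On a live system these commands require root privileges and real block devices; the dRAID vdev syntax and the "compatibility", "-u", "--skip-missing", and "zpool influxdb" options are as introduced in this release.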

Source: opennet.ru
