DRBD 9.2.0 Distributed Replicated Block Device Release

The release of DRBD 9.2.0, a distributed replicated block device, has been published. The system is implemented as a module for the Linux kernel and is distributed under the GPLv2 license. The drbd 9.2 branch can be used as a transparent replacement for previous drbd 9.x releases and is fully compatible at the level of the protocol, configuration files, and utilities.

DRBD makes it possible to combine the drives of cluster nodes into a single fault-tolerant storage pool. To applications and the system, this storage appears as a block device that is identical on all nodes. When DRBD is used, all local disk operations are sent to the other nodes and synchronized with their disks. If one node fails, the storage automatically continues to operate on the remaining nodes. When the failed node becomes available again, its state is automatically brought up to date.
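
As a concrete illustration, a minimal two-node resource definition might look roughly like the sketch below; the hostnames (alpha, bravo), the backing device (/dev/sdb1) and the port are illustrative assumptions, not details from the release:

    # /etc/drbd.d/r0.res — hypothetical two-node resource
    resource r0 {
        device    /dev/drbd0;        # replicated block device exposed to applications
        disk      /dev/sdb1;         # local backing disk on each node
        meta-disk internal;          # keep DRBD metadata on the backing disk

        on alpha {
            address 10.0.0.1:7789;   # replication endpoint of node "alpha"
            node-id 0;
        }
        on bravo {
            address 10.0.0.2:7789;   # replication endpoint of node "bravo"
            node-id 1;
        }

        connection-mesh {
            hosts alpha bravo;       # full mesh between the listed nodes
        }
    }

With such a file in place on both nodes, the resource would typically be initialized with "drbdadm create-md r0" and activated with "drbdadm up r0".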

The cluster that forms the storage can include several dozen nodes, located both on a local network and geographically dispersed across different data centers. Synchronization in such distributed storage is performed using mesh-network techniques (data propagates along a chain from node to node). Nodes can be replicated either synchronously or asynchronously. For example, locally hosted nodes can use synchronous replication, while asynchronous replication with additional compression and traffic encryption can be used for remote sites.
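
In DRBD terms, the synchronous and asynchronous modes correspond to the replication protocols: protocol C acknowledges a write only after it has reached the peer's disk, while protocol A acknowledges as soon as the data has been written locally and handed to the network send buffer. Per-connection settings could look roughly like the sketch below (hostnames are again hypothetical; on-the-wire compression is normally provided by the separate DRBD Proxy rather than by these options):

    resource r0 {
        # local pair in the same data center: fully synchronous
        connection {
            host alpha;
            host bravo;
            net { protocol C; }
        }
        # link to a remote site: asynchronous replication
        connection {
            host alpha;
            host charlie;
            net { protocol A; }
        }
    }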

In the new release:

  • Reduced latency for mirrored write requests. Tighter integration with the networking stack has reduced the number of scheduler context switches.
  • Reduced contention between application I/O and resync I/O by optimizing locks when resynchronizing extents.
  • Significantly improved resync performance on backends that use thin provisioning; the gain comes from merging trim/discard operations, which take much longer than ordinary write operations.
  • Added support for network namespaces, which makes it possible to integrate with Kubernetes and carry replication traffic over a separate network associated with the containers instead of the host's network.
  • Added a transport_rdma module that can be used as an InfiniBand/RoCE transport instead of TCP/IP over Ethernet. The new transport reduces latency and CPU load and lets data be received without extra copy operations (zero-copy); a configuration sketch follows this list.
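
As a rough illustration of the last point, the transport is selected with the transport option in the net section of a resource; the snippet below follows the general DRBD 9 configuration scheme and should be treated as a sketch rather than text from the release notes:

    resource r0 {
        net {
            transport rdma;   # use the transport_rdma module instead of the default TCP transport
        }
        # ... device, disk and "on <host>" sections as usual ...
    }

The RDMA transport additionally requires the corresponding kernel module and InfiniBand/RoCE-capable hardware on both peers.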

Source: opennet.ru
