Reiser5 file system benchmark results published

Performance test results have been published for the Reiser5 project, which is developing a substantially revised version of the Reiser4 file system with support for logical volumes with "parallel scaling". Unlike traditional RAID, parallel scaling implies the active participation of the file system in distributing data across the components of the logical volume. From an administrator's perspective, a significant difference from RAID is that the components of a parallel-scaling logical volume are formatted block devices.

The published benchmarks evaluate the performance of common file operations, such as writing a file to a logical volume and reading a file from a logical volume composed of a varying number of SSDs. They also measure the performance of operations on the logical volume itself: adding a device to the volume, removing a device from it, flushing data from proxy drives, and migrating the data of regular (non-special) files to a specified device.

Four solid-state drives (SSDs) were used to compose the volumes. The speed of an operation on a logical volume is defined as the ratio of the space used on the entire logical volume to the time required to complete the operation, including full synchronization with the drives.
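The definition above can be written out as a one-line calculation. The following Python sketch (the function name and the example figures are illustrative, not from the article) shows how such a speed figure is derived:

```python
def operation_speed_mb_s(used_bytes: int, elapsed_seconds: float) -> float:
    """Speed of a volume operation, as defined in the article:
    space used on the entire logical volume divided by the time to
    complete the operation (including full sync with the drives)."""
    return used_bytes / (1024 * 1024) / elapsed_seconds

# Hypothetical example: an operation over 100 GiB of used space
# that completes in 256 seconds:
print(operation_speed_mb_s(100 * 1024**3, 256))  # 400.0 (MB/s)
```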

The speed of every operation (with the exception of flushing data from a proxy disk to a volume composed of a small number of devices) exceeds the speed of copying data from one device to another. Moreover, the speed of the operations grows as the number of devices in the volume increases. The exception is the file migration operation, whose speed tends asymptotically (from above) to the write speed of the target device.

Low-level sequential access:

  Device   Read (MB/s)   Write (MB/s)
  DEV1     470           390
  DEV2     530           420

Large-file sequential read/write (MB/s):

  Number of disks in volume    Write   Read
  1 (DEV1)                     380     460
  1 (DEV2)                     410     518
  2 (DEV1+DEV2)                695     744
  3 (DEV1+DEV2+DEV3)           890     970
  4 (DEV1+DEV2+DEV3+DEV4)      950     1100

Sequential data copy from/to a formatted device:

  From device   To device   Speed (MB/s)
  DEV1          DEV2        260
  DEV2          DEV1        255

Adding a device to a logical volume:

  Volume           Device to add   Speed (MB/s)
  DEV1             DEV2            284
  DEV1+DEV2        DEV3            457
  DEV1+DEV2+DEV3   DEV4            574

Removing a device from a logical volume:

  Volume                Device to remove   Speed (MB/s)
  DEV1+DEV2+DEV3+DEV4   DEV4               890
  DEV1+DEV2+DEV3        DEV3               606
  DEV1+DEV2             DEV2               336

Flushing data from a proxy disk:

  Volume                Proxy disk   Speed (MB/s)
  DEV1                  DEV4         228
  DEV1+DEV2             DEV4         244
  DEV1+DEV2+DEV3        DEV4         290
  DEV1                  RAM0         283
  DEV1+DEV2             RAM0         301
  DEV1+DEV2+DEV3        RAM0         374
  DEV1+DEV2+DEV3+DEV4   RAM0         427

File migration:

  Volume                Target device   Speed (MB/s)
  DEV1+DEV2+DEV3+DEV4   DEV1            387
  DEV1+DEV2+DEV3        DEV1            403
  DEV1+DEV2             DEV1            427
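The claim that throughput grows with the number of devices can be checked directly against the large-file sequential write figures above. The following short Python snippet (not part of the project; the numbers are transcribed from the table) computes the scaling factor relative to the single-disk DEV1 baseline:

```python
# Sequential write throughput from the benchmark table (MB/s),
# keyed by the number of SSDs in the volume.
write_mb_s = {1: 380, 2: 695, 3: 890, 4: 950}

# Scaling factor relative to a single device (DEV1 as baseline):
for n, speed in write_mb_s.items():
    print(f"{n} disk(s): {speed / write_mb_s[1]:.2f}x")
```

Two disks reach about 1.83x the single-disk speed, and four disks about 2.5x, so the scaling is real but sub-linear on this hardware.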

It is noted that performance could be improved further if issuing I/O requests to the components of the logical volume were parallelized (for simplicity, this is currently done in a loop by a single thread), and if only the data subject to relocation were read during rebalancing (for simplicity, all data is currently read). The theoretical speed limit for adding/removing a second device in systems with parallel scaling is twice the speed of copying from the first disk to the second (respectively, from the second to the first). The measured speeds of adding and removing the second disk are currently 1.1 and 1.3 times the copy speed, respectively.
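The 1.1x and 1.3x figures follow directly from the tables above. This small Python check (figures transcribed from the article, variable names illustrative) reproduces the arithmetic:

```python
# Measured speeds from the benchmark tables (MB/s).
copy_dev1_to_dev2 = 260  # sequential copy DEV1 -> DEV2
copy_dev2_to_dev1 = 255  # sequential copy DEV2 -> DEV1
add_second_disk = 284    # adding DEV2 to a volume consisting of DEV1
remove_second_disk = 336 # removing DEV2 from the volume DEV1+DEV2

# Ratio of operation speed to raw copy speed; the theoretical
# limit for both ratios is 2.0.
print(round(add_second_disk / copy_dev1_to_dev2, 1))    # 1.1
print(round(remove_second_disk / copy_dev2_to_dev1, 1)) # 1.3
```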

In addition, an O(1) defragmenter has been announced that will process all components of a logical volume (including the proxy disk) in parallel, i.e. in a time not exceeding the time needed to process the largest component alone.

Source: opennet.ru