Release of the Lustre 2.13 cluster file system

The Lustre 2.13 cluster file system has been released. Lustre is used in most (~60%) of the largest Linux clusters, which contain tens of thousands of nodes. Scalability on such large systems is achieved through a multi-component architecture. The key components of Lustre are metadata servers (MDS), management servers (MGS), object storage servers (OSS), object storage targets (OST, which can run on top of ext4 or ZFS), and clients.


Main changes:

  • Implemented a persistent client-side cache (Persistent Client Cache, PCC), which allows local storage such as NVMe or NVRAM to be used as part of the global FS namespace. Clients can cache data for newly created or existing files in a locally mounted cache file system (e.g. ext4). While the owning client is running, these files are handled locally at the speed of the local FS, but if another client attempts to access them, they are automatically migrated to the global FS.
  • LNet routers now support automatic route discovery when routing over several paths through different network interfaces (Multi-Rail Routing), and reliability has been improved for configurations with nodes that have multiple network interfaces.
  • Added an "overstriping" mode, in which a single object storage target (OST) can hold several stripes of the same file, allowing multiple clients to perform concurrent writes to a file without waiting for a lock to be released.
  • Added support for self-extending file layouts (Self-Extending Layouts), which increases the flexibility of the PFL (Progressive File Layouts) mode in heterogeneous file systems. For example, when a file system includes small storage pools on fast flash drives alongside large disk pools, this feature allows writes to go to the fast storage first and, once its space runs out, automatically switch to the slower disk pools.
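As a rough sketch of how the overstriping and self-extending layout features are driven from the client side, the commands below use the `lfs setstripe` options introduced for these features; the mount point, pool names (`flash`, `disk`), file names, and sizes are all hypothetical:

```shell
# Overstriping (hypothetical paths): request 8 stripes for the file even if
# fewer OSTs are available, so a single OST holds several stripes and
# multiple clients can write concurrently with less lock contention.
lfs setstripe -C 8 /mnt/lustre/shared_output

# Self-extending PFL layout (hypothetical pool names): the first component
# lives on a fast "flash" pool and grows in 64 MiB extension steps; if the
# flash pool runs out of space, writes automatically continue on the
# component backed by the larger "disk" pool.
lfs setstripe -E 256M -z 64M -p flash -E -1 -p disk /mnt/lustre/big_file
```

These commands only make sense against a mounted Lustre file system, so they are shown as a configuration sketch rather than a runnable script.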

Source: opennet.ru
