Release of cache-bench 0.2.0, a tool for investigating the effectiveness of file caching

Seven months after the previous release, cache-bench 0.2.0 is out. Cache-bench is a Python script for evaluating the impact of virtual memory settings (vm.swappiness, vm.watermark_scale_factor, the Multigenerational LRU framework, and others) on the performance of tasks that depend on cached file reads, especially under low-memory conditions. The code is released under the CC0 license.

The script has been almost completely rewritten in version 0.2.0. Instead of reading files from a specified directory (the -d option has been removed in the new version), it now reads a single file in fragments of a specified size, in random order.
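
For illustration, a minimal Python sketch of this access pattern might look as follows; the function name, file path and timing logic here are assumptions for the example and are not taken from the actual script.

    # Illustrative only: read one file in fixed-size chunks at random offsets
    # through a plain file descriptor and report the achieved throughput.
    import os
    import random
    import time

    def random_chunk_read(path, chunk_kib=64):
        chunk = chunk_kib * 1024
        size = os.path.getsize(path)
        # All chunk-aligned offsets, visited in random order.
        offsets = list(range(0, size - chunk + 1, chunk))
        random.shuffle(offsets)

        read_bytes = 0
        start = time.monotonic()
        with open(path, "rb") as f:
            for off in offsets:
                f.seek(off)
                read_bytes += len(f.read(chunk))
        elapsed = time.monotonic() - start
        print(f"{read_bytes / 2**20:.1f} MiB in {elapsed:.2f} s "
              f"({read_bytes / 2**20 / elapsed:.1f} MiB/s)")

    random_chunk_read("/var/tmp/testfile", chunk_kib=64)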

New options:

  • --file - path to the file to read from.
  • --chunk - chunk size in KiB; 64 by default.
  • --mmap - read from a memory-mapped file object instead of through a file descriptor (a sketch of this and --preread follows the list).
  • --preread - before starting the test, pre-read (cache) the specified file by reading it sequentially in 1 MiB fragments.
  • --bloat - keep the read fragments in a list in order to increase the memory consumption of the process and create additional memory pressure.
  • --interval - output (logging) interval for results, in seconds.
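
The sketch below is a rough illustration of what the --mmap and --preread modes imply: the same random chunk reads performed through a memory-mapped file object, optionally preceded by a sequential pass in 1 MiB steps to warm the page cache. The function name and parameters are assumptions, not the script's actual implementation.

    # Illustrative only: random chunk reads via mmap, with an optional
    # sequential warm-up pass similar to what --preread describes.
    import mmap
    import os
    import random

    def random_chunk_read_mmap(path, chunk_kib=64, preread=False):
        chunk = chunk_kib * 1024
        size = os.path.getsize(path)
        offsets = list(range(0, size - chunk + 1, chunk))
        random.shuffle(offsets)

        with open(path, "rb") as f, \
                mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            if preread:
                # Touch the whole file sequentially in 1 MiB steps so it is
                # pulled into the page cache before the random reads.
                step = 1024 * 1024
                for off in range(0, size, step):
                    _ = mm[off:off + step]
            total = 0
            for off in offsets:
                # Slicing the mmap object copies the bytes, faulting the
                # pages in if they are not already cached.
                total += len(mm[off:off + chunk])
        return total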

Source: opennet.ru
