NetBSD Project Developers Present the NVMM Hypervisor
NVMM consists of a driver that runs at the kernel level and coordinates access to the hardware virtualization extensions, and the libnvmm library that runs in user space. The kernel and user-space components communicate through ioctl calls. A feature of NVMM that distinguishes it from hypervisors such as KVM is that the machine emulator and device emulation are kept entirely in an unprivileged user-space process, leaving only a minimal component in the kernel.
At the same time, libnvmm itself does not contain an emulator; it only provides an API for integrating NVMM support into existing emulators such as QEMU. The API covers operations such as creating and starting a virtual machine, mapping memory into the guest, and creating VCPUs. To improve security and reduce the attack surface, libnvmm performs only explicitly requested operations: by default, the more complex handlers (such as the instruction-emulation assists) are not invoked automatically and need not be used at all if the emulator can do without them. NVMM aims to stay simple and to leave as many aspects of the virtualization process as possible under the emulator's control.
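As an illustration of the scope of this API, the sketch below sets up a tiny virtual machine with libnvmm(3): it creates a machine, maps a buffer of host memory into the guest physical address space, and creates a single VCPU. This is a minimal sketch based on the libnvmm interface shipped around NetBSD 9; exact structure layouts and function signatures may differ between NetBSD versions.

    #include <sys/mman.h>
    #include <err.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <nvmm.h>

    #define GUEST_MEM_SIZE	(1024 * 1024)	/* 1 MiB of guest RAM */

    int
    main(void)
    {
    	struct nvmm_machine mach;
    	struct nvmm_vcpu vcpu;
    	void *mem;

    	/* Check that the NVMM kernel driver is present and usable. */
    	if (nvmm_init() == -1)
    		err(EXIT_FAILURE, "nvmm_init");

    	/* Create a virtual machine (a "machine" in NVMM terms). */
    	if (nvmm_machine_create(&mach) == -1)
    		err(EXIT_FAILURE, "nvmm_machine_create");

    	/* Allocate host memory that will back the guest RAM. */
    	mem = mmap(NULL, GUEST_MEM_SIZE, PROT_READ | PROT_WRITE,
    	    MAP_ANON | MAP_PRIVATE, -1, 0);
    	if (mem == MAP_FAILED)
    		err(EXIT_FAILURE, "mmap");

    	/* Register the host buffer with NVMM... */
    	if (nvmm_hva_map(&mach, (uintptr_t)mem, GUEST_MEM_SIZE) == -1)
    		err(EXIT_FAILURE, "nvmm_hva_map");

    	/* ...and map it at guest-physical address 0. */
    	if (nvmm_gpa_map(&mach, (uintptr_t)mem, 0, GUEST_MEM_SIZE,
    	    PROT_READ | PROT_WRITE | PROT_EXEC) == -1)
    		err(EXIT_FAILURE, "nvmm_gpa_map");

    	/* Create one virtual CPU. */
    	if (nvmm_vcpu_create(&mach, 0, &vcpu) == -1)
    		err(EXIT_FAILURE, "nvmm_vcpu_create");

    	/* Guest code would be copied into 'mem' and the VCPU registers
    	 * set with nvmm_vcpu_setstate() before entering the run loop
    	 * sketched further below. */

    	nvmm_vcpu_destroy(&mach, &vcpu);
    	nvmm_machine_destroy(&mach);
    	return EXIT_SUCCESS;
    }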
The kernel-level part of NVMM is fairly tightly integrated with the NetBSD kernel and can improve performance by reducing the number of context switches between the guest OS and the host environment. On the user-space side, libnvmm tries to batch common I/O operations and avoid needless system calls. Memory allocation is built on the pmap subsystem, which allows guest memory pages to be evicted to the swap partition if the system runs short of memory. NVMM avoids global locks and scales well, allowing different guest virtual machines to run on different CPU cores at the same time.
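To show how control returns to user space, the fragment below sketches a typical VCPU run loop as a continuation of the setup above: nvmm_vcpu_run() enters the guest and returns when an exit occurs, and the exit reason is then examined in user space, where port I/O and MMIO accesses can be handed to libnvmm's assist helpers (which in turn call I/O and memory callbacks registered by the emulator). Exit-reason names and the exact shape of struct nvmm_vcpu follow the NetBSD 9 era interface and may vary between releases.

    /* Simplified VCPU run loop (continuation of the setup sketch above). */
    static void
    run_vcpu(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
    {
    	for (;;) {
    		if (nvmm_vcpu_run(mach, vcpu) == -1)
    			err(EXIT_FAILURE, "nvmm_vcpu_run");

    		switch (vcpu->exit->reason) {
    		case NVMM_VCPU_EXIT_NONE:
    			/* Nothing to emulate, re-enter the guest. */
    			break;
    		case NVMM_VCPU_EXIT_IO:
    			/* Port I/O trapped: libnvmm decodes and emulates
    			 * the instruction via the emulator's callbacks. */
    			if (nvmm_assist_io(mach, vcpu) == -1)
    				err(EXIT_FAILURE, "nvmm_assist_io");
    			break;
    		case NVMM_VCPU_EXIT_MEMORY:
    			/* MMIO access trapped: handled by the memory assist. */
    			if (nvmm_assist_mem(mach, vcpu) == -1)
    				err(EXIT_FAILURE, "nvmm_assist_mem");
    			break;
    		case NVMM_VCPU_EXIT_HALTED:
    			/* The guest executed HLT: leave the loop. */
    			return;
    		default:
    			errx(EXIT_FAILURE, "unhandled exit reason");
    		}
    	}
    }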
A QEMU-based solution has been prepared that uses NVMM for hardware-assisted virtualization, and work is underway to get the prepared patches merged into mainline QEMU. The QEMU + NVMM combination can already be used to run guest systems.
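In a QEMU build with the NVMM patches, the accelerator is selected the same way as other accelerators, through the -accel option with the name nvmm; the disk image name and resource sizes below are placeholders:

    qemu-system-x86_64 -accel nvmm \
        -m 2048 -smp 2 \
        -hda guest.img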
Source: opennet.ru