[LSF/MM/BPF TOPIC] Adding NVMeVirt to Kernel mainline

Damien Le Moal dlemoal at kernel.org
Wed Feb 21 23:10:34 PST 2024


On 2/22/24 10:38, Jaehoon Shim wrote:
> Hi all,
> 
> My research group has recently introduced NVMeVirt, a software-defined
> virtual NVMe device implemented as a Linux kernel module. Upon
> loading, NVMeVirt emulates an NVMe device that is recognized by the
> host as a native PCIe device.
> - https://github.com/snu-csl/nvmevirt
> - https://www.usenix.org/system/files/fast23-kim.pdf
> 
> Advantages of NVMeVirt are:
> - Deployable in real environments (not virtual)
> - PCI peer-to-peer DMA support
> - Low-latency device support
> - Multiple namespace support (each namespace can support different command sets)
> - Multiple device support
> - Various command set support (currently supporting ZNS and KV)
> - Accurate performance emulation
> 
> What if we simplified NVMeVirt and added it to the kernel mainline,
> just like scsi_debug in the SCSI subsystem? This would offer an
> accessible tool for developers to develop and debug NVMe driver
> functionality, especially when NVMe devices implementing a specific
> part of the spec are unavailable.
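
[For reference, this is how the scsi_debug precedent is exercised today, and how an NVMeVirt module load looks per the out-of-tree project's documentation. The nvmev.ko parameter names and values below are taken from that project's README and are assumptions about a hypothetical mainline interface, not an existing one:]

```shell
# scsi_debug: the existing in-tree SCSI emulation precedent.
# dev_size_mb and max_luns are real scsi_debug module parameters.
modprobe scsi_debug dev_size_mb=256 max_luns=1

# NVMeVirt today (out of tree): backed by a physical memory region
# reserved at boot (e.g. memmap=16G\$64G on the kernel command line).
# Parameter names follow the project's README and are assumptions here,
# not a mainline interface.
insmod ./nvmev.ko memmap_start=64G memmap_size=16G cpus=0,1
```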

Qemu emulates nvme devices fairly well already.

What is the backing store for your kernel module nvme emulation? Memory only?
Files or a block device? If it is the latter, how can you do "Accurate
performance emulation"? And what does "Accurate performance emulation" mean
anyway? Different NVMe drives from the same or different vendors have different
performance characteristics. So what exactly are you emulating here?

> 
> Best regards,
> Jaehoon
> 

-- 
Damien Le Moal
Western Digital Research
