[LSF/MM/BPF TOPIC] Adding NVMeVirt to Kernel mainline

Jaehoon Shim jmattshim at gmail.com
Thu Feb 22 21:38:30 PST 2024


On Thu, 22 Feb 2024 at 16:10, Damien Le Moal <dlemoal at kernel.org> wrote:
>
> On 2/22/24 10:38, Jaehoon Shim wrote:
> > Hi all,
> >
> > My research group has recently introduced NVMeVirt, a software-defined
> > virtual NVMe device implemented as a Linux kernel module. Upon
> > loading, NVMeVirt emulates an NVMe device that is recognized by the
> > host as a native PCIe device.
> > - https://github.com/snu-csl/nvmevirt
> > - https://www.usenix.org/system/files/fast23-kim.pdf
> >
> > Advantages of NVMeVirt are:
> > - Deployable in real environments (not virtual)
> > - PCI peer-to-peer DMA support
> > - Low-latency device support
> > - Multiple namespace support (each namespace can support different command sets)
> > - Multiple device support
> > - Various command set support (currently supporting ZNS and KV)
> > - Accurate performance emulation
> >
> > What if we can simplify NVMeVirt and add it to the Kernel mainline
> > just like the scsi_debug in the SCSI driver? This would offer an
> > accessible tool for developers, especially when NVMe devices with
> > specific spec are unavailable, to develop and debug the NVMe driver
> > functionalities.
>
> Qemu emulates nvme devices fairly well already.
>
> What is the backing store for your kernel module nvme emulation ? Memory only ?
> files or block device ? If it is the latter, how can you do "Accurate
> performance emulation". And what does "Accurate performance emulation" mean
> anyway ? Different NVMe drives from the same or different vendors have different
> performance characteristics. So what exactly are you emulating here ?
>
> >
> > Best regards,
> > Jaehoon
> >
>
> --
> Damien Le Moal
> Western Digital Research
>

During our evaluation, we found it hard to emulate low-latency SSDs
with FEMU because of its virtualization overhead. It is also hard for
FEMU to communicate with other real PCIe devices (e.g., FPGAs, GPUs,
NICs) on the host system.

NVMeVirt's backing store is kernel memory, similar to scsi_debug.
A physical memory range needs to be reserved for NVMeVirt.
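
For example, the reservation and the module load end up looking
roughly like this (option names as documented in the NVMeVirt README;
the exact sizes, offsets, and CPUs depend on your machine):

  # In the kernel boot parameters (e.g. GRUB_CMDLINE_LINUX, where the
  # '$' has to be escaped): reserve 64 GiB starting at the 128 GiB mark
  memmap=64G\$128G

  # Load NVMeVirt backed by that reserved range, pinning its I/O
  # threads to dedicated CPUs
  sudo insmod ./nvmev.ko memmap_start=128G memmap_size=64G cpus=7,8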

By "accurate performance emulation" we mean that NVMeVirt currently
supports a NAND flash SSD FTL performance model, just like FEMU; it
has a page-mapping-based FTL inside. However, we support many more
features of real SSD FTLs that other emulators don't, such as a write
buffer, one-shot programming, and multiple FTL instances.
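
To make "performance model" more concrete, below is a deliberately
simplified sketch of how a page-mapping FTL with a write buffer can
assign completion times to reads and writes. This is illustrative
user-space C, not NVMeVirt's actual code, and all latency numbers are
made-up placeholders:

/* Illustrative sketch (not NVMeVirt code): a page-mapping FTL
 * performance model that turns host I/Os into completion timestamps.
 * All latency numbers are placeholders. */
#include <stdint.h>
#include <stdio.h>

#define NR_CHANNELS     4
#define NAND_READ_US    40      /* placeholder tR */
#define NAND_PROG_US    500     /* placeholder tPROG */
#define WRITE_BUF_PAGES 64      /* pages absorbed before a flush */

struct ftl_model {
    uint64_t channel_free_at[NR_CHANNELS]; /* per-channel availability (us) */
    unsigned buffered_pages;               /* pages in the write buffer */
};

/* Read: wait for the target channel to become free, then pay tR.
 * (Page-mapped, striped across channels; mapping table omitted.) */
static uint64_t model_read(struct ftl_model *m, uint64_t now, uint64_t lpn)
{
    unsigned ch = lpn % NR_CHANNELS;
    uint64_t start = now > m->channel_free_at[ch] ? now : m->channel_free_at[ch];
    uint64_t done = start + NAND_READ_US;

    m->channel_free_at[ch] = done;
    return done;
}

/* Write: buffered writes complete immediately; a full buffer forces a
 * flush that pays the NAND program latency on every channel. */
static uint64_t model_write(struct ftl_model *m, uint64_t now)
{
    uint64_t done = now;
    unsigned ch;

    if (++m->buffered_pages < WRITE_BUF_PAGES)
        return now;                        /* absorbed by the write buffer */

    m->buffered_pages = 0;
    for (ch = 0; ch < NR_CHANNELS; ch++) {
        uint64_t start = now > m->channel_free_at[ch] ? now : m->channel_free_at[ch];

        m->channel_free_at[ch] = start + NAND_PROG_US;
        if (m->channel_free_at[ch] > done)
            done = m->channel_free_at[ch];
    }
    return done;                           /* flush latency becomes visible */
}

int main(void)
{
    struct ftl_model m = { 0 };
    int i;

    printf("read completes at %llu us\n",
           (unsigned long long)model_read(&m, 0, 123));
    for (i = 0; i < WRITE_BUF_PAGES; i++)
        printf("write %2d completes at %llu us\n", i,
               (unsigned long long)model_write(&m, 10));
    return 0;
}

The sketch only shows the overall shape; the real model additionally
has to account for the features listed above (one-shot programming,
per-FTL-instance state, and so on), which is what makes it more than
a simple fixed delay.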

In our research, we have emulated Samsung 970 Pro and Intel Optane SSDs.

We believe that, with these advantages, NVMeVirt can open up more
opportunities for developers.


