[RFC PATCH 0/2] virtio nvme
Keith Busch
keith.busch at intel.com
Thu Sep 10 07:02:57 PDT 2015
On Wed, 9 Sep 2015, Ming Lin wrote:
> The goal is to have a full NVMe stack from the VM guest (virtio-nvme)
> to the host (vhost_nvme) to a LIO NVMe-over-fabrics target.
>
> Right now there is a lot of code duplicated between linux/nvme-core.c
> and qemu/nvme.c. The ideal result would be a multi-level NVMe stack
> (similar to SCSI), so the nvme code can be reused, for example
> (a rough sketch follows the diagram):
>
>                 .-------------------------.
>                 |   NVMe device register  |
>   Upper level   |   NVMe protocol process |
>                 |                         |
>                 '-------------------------'
>
>
>                 .-----------. .-----------. .-------------------.
>   Lower level   |   PCIe    | |  VIRTIO   | | NVMe over Fabrics |
>                 |           | |           | | initiator         |
>                 '-----------' '-----------' '-------------------'
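>
> For illustration, a rough sketch of how the split could look; the
> names below (nvme_ll_ops, nvme_register_transport) are hypothetical,
> not existing kernel API:
>
>     struct nvme_dev;
>     struct nvme_queue;
>     struct nvme_command;
>
>     /* Lower-level transport hooks that the upper-level core calls.
>      * PCIe, virtio and the fabrics initiator would each provide
>      * their own implementation. */
>     struct nvme_ll_ops {
>             int  (*queue_rq)(struct nvme_queue *nvmeq,
>                              struct nvme_command *cmd);
>             int  (*create_queue)(struct nvme_dev *dev, int qid,
>                                  int depth);
>             void (*delete_queue)(struct nvme_dev *dev, int qid);
>     };
>
>     /* The upper level (device registration, protocol processing)
>      * stays transport-agnostic and dispatches through the ops. */
>     int nvme_register_transport(struct nvme_dev *dev,
>                                 const struct nvme_ll_ops *ops);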
>
> todo:
> - tune performance; should be as good as virtio-blk/virtio-scsi
> - support discard/flush/integrity
> - need Red Hat's help for the VIRTIO_ID_NVME PCI ID (see the sketch
>   after this list)
> - multi-level NVMe stack
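>
> What the ID reservation might look like (the value below is only a
> placeholder; the real number has to be assigned through the virtio
> spec process):
>
>     /* include/uapi/linux/virtio_ids.h */
>     #define VIRTIO_ID_NVME  20  /* placeholder, NOT an assigned ID */
>
>     /* Per virtio 1.0, the non-transitional PCI device ID would then
>      * follow as 0x1040 + VIRTIO_ID_NVME. */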
Hi Ming,
I'll be out traveling for the next week, so I won't have much time to
do a proper review until the following week.
I think it'd be better to get this hierarchy set up to get the most
reuse possible than to keep this much code duplication between the
existing driver and the emulated QEMU NVMe device. For better or worse,
I think the generic NVMe layer is where things are going. Are you
signed up with the fabrics contributors?