[RFC PATCH 0/2] virtio nvme

Ming Lin mlin at kernel.org
Thu Sep 10 10:28:18 PDT 2015


On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 10, 2015 at 6:48 AM, Ming Lin <mlin at kernel.org> wrote:
> > These 2 patches add virtio-nvme to the kernel and QEMU,
> > basically modified from the virtio-blk and nvme code.
> >
> > As the title says, this is a request for your comments.
> >
> > Try it in QEMU with:
> > -drive file=disk.img,format=raw,if=none,id=D22 \
> > -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
> >
> > The goal is to have a full NVMe stack from the VM guest (virtio-nvme)
> > through the host (vhost_nvme) to an LIO NVMe-over-fabrics target.
> 
> Why is a virtio-nvme guest device needed?  I guess there must either
> be NVMe-only features that you want to pass through, or you think the
> performance will be significantly better than virtio-blk/virtio-scsi?

It simply passes through NVMe commands.

Right now performance is poor. Performance tuning is on my todo list;
it should end up as good as virtio-blk/virtio-scsi.
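
To be concrete about the pass-through: the guest would put a raw NVMe
submission queue entry into the virtqueue descriptor chain and the host
would hand back the raw completion entry. A rough sketch of the request
framing, modeled on virtio-blk's request (the struct name and layout
here are illustrative, not necessarily what the patch does):

#include <linux/nvme.h>	/* struct nvme_command, struct nvme_completion */

/* Illustrative virtio-nvme request framing, not the actual patch code. */
struct virtio_nvme_req {
	struct nvme_command cmd;	/* 64-byte NVMe SQE, passed through unmodified */
	/* data buffers sit in between as their own descriptors */
	struct nvme_completion cqe;	/* 16-byte NVMe CQE, written back by the host */
};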

> 
> At first glance it seems like the virtio_nvme guest driver is just
> another block driver like virtio_blk, so I'm not clear why a
> virtio-nvme device makes sense.

I think the future "LIO NVMe target" will only speak the NVMe protocol.

Nick (CCed), could you correct me if I'm wrong?

For the SCSI stack, we have:
virtio-scsi (guest)
tcm_vhost (or vhost_scsi, host)
LIO SCSI target

For the NVMe stack, we'll have similar components:
virtio-nvme (guest)
vhost_nvme (host)
LIO NVMe target
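
As a sketch of how the host side would be wired up: by analogy with
vhost-scsi, vhost_nvme would expose a character device that QEMU drives
with the standard vhost ioctls. The device node name below is my
assumption; only the ioctls are existing vhost API:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Open and take ownership of a hypothetical /dev/vhost-nvme, the same
 * first steps QEMU performs for /dev/vhost-scsi today. */
int vhost_nvme_open(void)
{
	int fd = open("/dev/vhost-nvme", O_RDWR);	/* node name is an assumption */

	if (fd < 0)
		return -1;
	if (ioctl(fd, VHOST_SET_OWNER) < 0) {		/* standard vhost ioctl */
		close(fd);
		return -1;
	}
	/* VHOST_SET_MEM_TABLE and VHOST_SET_VRING_* calls would follow,
	 * one set per virtqueue. */
	return fd;
}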

> 
> > Right now there is a lot of code duplicated from linux/nvme-core.c and qemu/nvme.c.
> > The ideal result is to have a multi-level NVMe stack (similar to SCSI),
> > so we can re-use the nvme code, for example:
> >
> >                         .-------------------------.
> >                         | NVMe device register    |
> >   Upper level           | NVMe protocol process   |
> >                         |                         |
> >                         '-------------------------'
> >
> >
> >
> >               .-----------.    .-----------.    .------------------.
> >  Lower level  |   PCIe    |    | VIRTIO    |    |NVMe over Fabrics |
> >               |           |    |           |    |initiator         |
> >               '-----------'    '-----------'    '------------------'
> 
> You mentioned LIO and SCSI.  How will NVMe over Fabrics be integrated
> into LIO?  If it is mapped to SCSI then using virtio_scsi in the guest
> and tcm_vhost should work.

I think it's not mapped to SCSI.

Nick, would you share more here?
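
Coming back to the code re-use idea in the diagram above, one possible
shape for the split is a shared core that submits commands through
per-transport ops, so PCIe, virtio and the fabrics initiator each plug
in their own lower level. Purely a sketch, with invented names:

/* Sketch only; all names are illustrative. Each lower level (PCIe,
 * virtio, NVMe-over-Fabrics initiator) fills in one of these, and the
 * shared upper level does device registration and NVMe protocol
 * processing on top. */
struct nvme_dev;
struct nvme_queue;
struct nvme_command;

struct nvme_transport_ops {
	int  (*submit_cmd)(struct nvme_queue *q, struct nvme_command *cmd);
	int  (*create_queue)(struct nvme_dev *dev, int qid, int depth);
	void (*delete_queue)(struct nvme_queue *q);
};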

> 
> Please also post virtio draft specifications documenting the virtio device.

I'll do this later.
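
As a rough preview of what the config space might contain (field names
are placeholders modeled on struct virtio_blk_config; the real set is
TBD until the draft is written):

#include <linux/types.h>	/* __le32, __le64 */

/* Placeholder sketch of a virtio-nvme config space. */
struct virtio_nvme_config {
	__le32 num_queues;	/* number of I/O queue pairs supported */
	__le32 max_segments;	/* max data segments per command */
	__le64 capacity;	/* namespace capacity in 512-byte sectors */
} __attribute__((packed));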

> 
> Stefan




