[RFC PATCH 0/2] virtio nvme
Nicholas A. Bellinger
nab at linux-iscsi.org
Thu Sep 17 17:55:52 PDT 2015
On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote:
> On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote:
> > Hi Ming & Co,
> >
> > On Thu, 2015-09-10 at 10:28 -0700, Ming Lin wrote:
> > > On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote:
> > > > On Thu, Sep 10, 2015 at 6:48 AM, Ming Lin <mlin at kernel.org> wrote:
> > > > > These 2 patches added virtio-nvme to kernel and qemu,
> > > > > basically modified from virtio-blk and nvme code.
> > > > >
> > > > > As the title says, requesting your comments.
> >
> > <SNIP>
> >
> > > >
> > > > At first glance it seems like the virtio_nvme guest driver is just
> > > > another block driver like virtio_blk, so I'm not clear why a
> > > > virtio-nvme device makes sense.
> > >
> > > I think the future "LIO NVMe target" will only speak the NVMe protocol.
> > >
> > > Nick (CCed), could you correct me if I'm wrong?
> > >
> > > For the SCSI stack, we have:
> > > virtio-scsi(guest)
> > > tcm_vhost(or vhost_scsi, host)
> > > LIO-scsi-target
> > >
> > > For the NVMe stack, we'll have similar components:
> > > virtio-nvme(guest)
> > > vhost_nvme(host)
> > > LIO-NVMe-target
> > >
> >
> > I think it's more interesting to consider a 'vhost-style' driver that
> > can be used with unmodified NVMe host OS drivers.
> >
> > Dr. Hannes (CC'ed) did something like this for megasas a few years
> > back, using specialized QEMU emulation plus an eventfd-based LIO fabric
> > driver, and got it working with Linux and MSFT guests.
> >
> > Doing something similar for NVMe would (potentially) be on par with
> > current virtio-scsi+vhost-scsi small-block performance for scsi-mq
> > guests, without the extra burden of a new command-set-specific virtio
> > driver.
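The "eventfd" in this proposal is the ordinary Linux eventfd(2) primitive: QEMU/KVM can attach one to the NVMe doorbell MMIO range (an ioeventfd) so a guest doorbell write kicks the host-kernel driver directly, and another (an irqfd) can carry the completion interrupt back. Below is a minimal, self-contained sketch of just that kick/ack pattern; all names are invented for illustration and none of the actual vhost or LIO plumbing is shown.

/* Minimal eventfd kick/ack demo: one side plays the "doorbell writer"
 * (guest/QEMU), the other the host-side driver waiting for work.
 * This only illustrates the notification primitive, not NVMe itself. */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
    int kick = eventfd(0, 0);          /* guest -> host: "doorbell rung" */
    int done = eventfd(0, 0);          /* host -> guest: "completion posted" */
    uint64_t one = 1, val;

    /* Guest/QEMU side: ring the doorbell after queuing a command. */
    write(kick, &one, sizeof(one));

    /* Host driver side: wake up, consume the kick, process the queue,
     * then signal completion back. */
    read(kick, &val, sizeof(val));
    printf("host: woken by %llu doorbell kick(s)\n",
           (unsigned long long)val);
    write(done, &one, sizeof(one));

    /* Guest side: a completion interrupt would be injected from this. */
    read(done, &val, sizeof(val));
    printf("guest: %llu completion notification(s)\n",
           (unsigned long long)val);

    close(kick);
    close(done);
    return 0;
}

In the real setup the write side would presumably be KVM signalling the ioeventfd on a guest doorbell write, and the read side would be the in-kernel fabric driver's work thread.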
>
> I'm trying to understand it.
> Is it like the diagram below?
>
> .------------------------.   MMIO    .---------------------------------------.
> | Guest                  |---------->| Qemu                                  |
> | Unmodified NVMe driver |<----------| NVMe device simulation(eventfd based) |
> '------------------------'           '---------------------------------------'
>                                                 |              ^
>                                  write NVMe     |              | notify command
>                                  command        |              | completion
>                                  to eventfd     |              | to eventfd
>                                                 v              |
>                                      .--------------------------------------.
>                                      | Host:                                |
>                                      | eventfd based LIO NVMe fabric driver |
>                                      '--------------------------------------'
>                                                         |
>                                                         | nvme_queue_rq()
>                                                         v
>                                      .--------------------------------------.
>                                      | NVMe driver                          |
>                                      '--------------------------------------'
>                                                         |
>                                                         |
>                                                         v
>                                      .-------------------------------------.
>                                      | NVMe device                         |
>                                      '-------------------------------------'
>
Correct. The LIO driver on the KVM host would handle some amount of
NVMe host interface emulation in kernel code, and would be able to
decode NVMe Read/Write/Flush operations and translate and submit them
to existing backend drivers.
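To make the decode step concrete, here is a hedged, userspace-only sketch of pulling a Read/Write/Flush out of a submission queue entry and turning it into a generic backend request. The opcode values (Flush=0x00, Write=0x01, Read=0x02) and the CDW10/11 starting LBA plus CDW12 zero-based block count follow the NVMe NVM command set; the struct and helper names are invented, not the proposed LIO driver's API.

/* Illustrative decode of an NVM-command-set SQE into a generic backend
 * request. Opcode values and CDW layout are per the NVMe spec;
 * everything else is hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct nvme_sqe {                 /* 64-byte I/O submission queue entry */
    uint8_t  opcode;
    uint8_t  flags;
    uint16_t cid;
    uint32_t nsid;
    uint64_t rsvd;
    uint64_t mptr;
    uint64_t prp1, prp2;
    uint32_t cdw10, cdw11, cdw12, cdw13, cdw14, cdw15;
};

enum backend_op { BACKEND_READ, BACKEND_WRITE, BACKEND_FLUSH, BACKEND_UNSUPPORTED };

struct backend_req {              /* what would be handed to submit_bio() etc. */
    enum backend_op op;
    uint64_t offset_bytes;
    uint64_t len_bytes;
};

static struct backend_req decode_sqe(const struct nvme_sqe *sqe,
                                     uint32_t block_size)
{
    struct backend_req r = { .op = BACKEND_UNSUPPORTED };
    uint64_t slba = ((uint64_t)sqe->cdw11 << 32) | sqe->cdw10; /* starting LBA */
    uint64_t nlb  = (sqe->cdw12 & 0xffff) + 1;                 /* 0's based count */

    switch (sqe->opcode) {
    case 0x00: r.op = BACKEND_FLUSH; break;
    case 0x01: r.op = BACKEND_WRITE; break;
    case 0x02: r.op = BACKEND_READ;  break;
    default:   return r;            /* other opcodes not handled here */
    }
    if (r.op != BACKEND_FLUSH) {
        r.offset_bytes = slba * block_size;
        r.len_bytes    = nlb * block_size;
    }
    return r;
}

int main(void)
{
    struct nvme_sqe sqe = { .opcode = 0x02, .cdw10 = 2048, .cdw12 = 7 };
    struct backend_req r = decode_sqe(&sqe, 512);
    printf("op=%d offset=%llu len=%llu\n", r.op,
           (unsigned long long)r.offset_bytes,
           (unsigned long long)r.len_bytes);
    return 0;
}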
As with the NVMe-over-fabrics case, it would be possible to map backend
driver queue resources onto real NVMe hardware (e.g. a target_core_nvme
backend), but since close to the same amount of software emulation
would still be done for either backend driver case, I wouldn't expect
much performance advantage over just using normal submit_bio().
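One part of the host-interface emulation that stays the same whichever backend serves the I/O is posting the completion: a 16-byte CQE carrying the command ID, SQ head pointer, status and current phase tag has to be written into the guest-visible completion queue, followed by an interrupt. A rough sketch follows, with the CQE dword layout per the NVMe spec and the queue bookkeeping invented for illustration.

/* Posting a completion queue entry: per-command emulation work done
 * regardless of which backend served the I/O. CQE layout follows the
 * NVMe spec; the queue bookkeeping is hypothetical. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct nvme_cqe {
    uint32_t result;      /* DW0: command specific */
    uint32_t rsvd;        /* DW1 */
    uint16_t sq_head;     /* DW2 bits 15:0  */
    uint16_t sq_id;       /* DW2 bits 31:16 */
    uint16_t cid;         /* DW3 bits 15:0  */
    uint16_t status;      /* DW3 bit 16 = phase tag, bits 31:17 = status */
};

struct emu_cq {           /* hypothetical guest-visible completion queue */
    struct nvme_cqe *ring;
    uint16_t size, tail;
    uint8_t  phase;       /* flips each time the tail wraps */
};

static void post_completion(struct emu_cq *cq, uint16_t sq_id,
                            uint16_t sq_head, uint16_t cid, uint16_t sc)
{
    struct nvme_cqe *cqe = &cq->ring[cq->tail];

    memset(cqe, 0, sizeof(*cqe));
    cqe->sq_id   = sq_id;
    cqe->sq_head = sq_head;
    cqe->cid     = cid;
    /* status code above the phase-tag bit, current phase in bit 0 */
    cqe->status  = (uint16_t)((sc << 1) | (cq->phase & 1));

    if (++cq->tail == cq->size) {   /* wrap and flip the phase tag */
        cq->tail = 0;
        cq->phase ^= 1;
    }
    /* ...then raise the guest's interrupt (irqfd/eventfd) here. */
}

int main(void)
{
    struct nvme_cqe ring[4] = { { 0 } };
    struct emu_cq cq = { .ring = ring, .size = 4, .phase = 1 };

    post_completion(&cq, /*sq_id=*/1, /*sq_head=*/3, /*cid=*/42, /*sc=*/0);
    printf("cqe[0]: cid=%u status=0x%x\n", ring[0].cid, ring[0].status);
    return 0;
}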
--nab