NVMe driver within hypervisors

Daniel Stodden daniel.stodden at gmail.com
Mon Dec 1 16:36:11 PST 2014


On Tue, 2014-12-02 at 00:08 +0000, Keith Busch wrote:
> On Mon, 1 Dec 2014, Daniel Stodden wrote:
> > On Mon, 2014-12-01 at 15:07 +0000, Keith Busch wrote:
> >> Correct me if I'm wrong, but doesn't Xen derive device driver support
> >> from its "dom-0"? Just use an nvme-capable Linux guest there and you've
> >> enabled Xen to support nvme, yeah?
> >
> > Correct. Backends in dom0 eventually translate guest I/O to bios issued
> > to a kernel blockdev. Such as an NVMe one, typically implemented by a
> > normal Linux driver.
> 
> I get asked about Xen support a lot, but my experience is limited so I've
> only been guessing when I say it ought to work just fine. No one ever
> takes it to the next level as far as I know. I'm not sure what people
> are waiting for (a commercial offering perhaps?), so I'll give it a shot.
[..]
> Looks like Xen and NVMe have worked for years! Hardly worth a mention
> on nvmexpress.org, though; they don't announce for a particular linux
> distro, so I don't see why xen would get special treatment.

I used to help maintain a fair bit of the block I/O backend code in
XenServer. The backend landscape has changed a bit since then.

But for locally attached storage, every backend I'm currently aware of
eventually maps back to a normal kernel block device driver.

In practice, there may be ways in which a direct mapping
(guest->blkfront->blkback->nvme) works out sub-optimally (LBA size,
partition offsets, type of guest), but nothing I'd expect to be
specific to NVMe.
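
To illustrate the direct case: in an xl/xm domain config, handing the
NVMe namespace (or a partition on it) straight to a PV guest through
blkback looks roughly like the line below. The device path and vdev
name are only placeholders:

    # dom0's NVMe namespace exported to the guest as xvda via blkback
    disk = [ 'phy:/dev/nvme0n1,xvda,w' ]

From the guest's point of view that's just another xvd* disk; all the
NVMe specifics stay with the dom0 driver.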

And certainly there's a much wider variety of ways to map guest I/O
(say, guest->blkfront->blktap->vhd->ext3/4->nvme).
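
As a sketch of that indirect case: with a blktap-enabled toolstack, a
VHD image sitting on an ext3/4 filesystem (which itself lives on the
NVMe device) might be attached along these lines. The exact tap syntax
and paths vary with Xen version and toolstack, so take this as
illustrative only:

    # VHD image on an ext4 filesystem carved out of the NVMe device
    disk = [ 'tap2:tapdisk:vhd:/srv/vhd/guest.vhd,xvda,w' ]

Every extra layer (tapdisk, the image format, the hosting filesystem)
adds its own mapping, but the I/O still ends up as bios against the
kernel's NVMe block device.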

But they all bottom out in standard kernel facilities, and should work
fine. Maxing out throughput may be a different question. But blk-mq
support may already help with that, and fully exploiting it would
rather be Xen work than anything NVMe-specific.

Daniel



