[PATCH RFC 00/11] nvmet: Add NVMe target mdev/vfio driver

Hannes Reinecke hare at suse.de
Fri Mar 14 01:31:23 PDT 2025


On 3/13/25 18:17, Mike Christie wrote:
> On 3/13/25 1:47 AM, Christoph Hellwig wrote:
>> On Thu, Mar 13, 2025 at 12:18:01AM -0500, Mike Christie wrote:
>>>
>>> If we agree on a new virtual NVMe driver being ok, why mdev vs vhost?
>>> =====================================================================
>>> The problem with a vhost nvme is:
>>>
>>> 2.1. If we do a fully vhost nvmet solution, it will require new guest
>>> drivers that present NVMe interfaces to userspace then perform the
>>> vhost spec on the backend like how vhost-scsi does.
>>>
>>> I don't want to implement a windows or even a linux nvme vhost
>>> driver. I don't think anyone wants the extra headache.
>>
>> As in a nvme-virtio spec?  Note that I suspect you could use the
>> vhost infrastructure for something that isn't virtio, but it would
>> be a fair amount of work.
> 
> Yeah, for this option 2.1 I meant a full nvme-virtio spec.
> 
> (forgot to cc Hannes's so cc'ing him now)
> 
And it really is a bit pointless. An nvme-virtio spec would, at the end
of the day, result in a virtio pci driver in the guest which then
speaks nvme over the virtio protocol.

But we already _have_ an nvme-pci driver, so the benefits of that
approach would be ... questionable.
OTOH, virtio-nvme really should be a fabrics driver, as it's running
nvme over another transport protocol.
Then you could do proper SGL mapping etc.
_But_ you would need another guest driver for that, which brings its
own set of problems. Not to mention that you would have to update the
spec for that, as you need another transport identifier.
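
For illustration only, a minimal sketch (the "virtio" transport name
and the stubbed create_ctrl are assumptions of mine, not anything from
this thread) of how such a guest driver would plug into the existing
nvme-fabrics core; all the real work - queue setup, SGL mapping, and
the new transport type in the spec - is exactly what this stub leaves
out:

#include <linux/module.h>
#include <linux/err.h>
#include "fabrics.h"            /* drivers/nvme/host/fabrics.h */

/* Hypothetical: create a controller backed by virtio queues. */
static struct nvme_ctrl *nvme_virtio_create_ctrl(struct device *dev,
                struct nvmf_ctrl_options *opts)
{
        /* Allocate the ctrl, set up virtqueues, connect the admin queue... */
        return ERR_PTR(-EOPNOTSUPP);    /* placeholder only */
}

static struct nvmf_transport_ops nvme_virtio_transport = {
        .name           = "virtio",     /* hypothetical transport name */
        .module         = THIS_MODULE,
        .create_ctrl    = nvme_virtio_create_ctrl,
};

static int __init nvme_virtio_init(void)
{
        return nvmf_register_transport(&nvme_virtio_transport);
}

static void __exit nvme_virtio_exit(void)
{
        nvmf_unregister_transport(&nvme_virtio_transport);
}

module_init(nvme_virtio_init);
module_exit(nvme_virtio_exit);
MODULE_LICENSE("GPL");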

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare at suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


