vhost/virtio fabric

Hannes Reinecke hare at suse.de
Fri Feb 19 11:41:00 EST 2021


On 2/9/21 5:41 AM, Chaitanya Kulkarni wrote:
> Roman,
> 
> On 2/8/21 20:29, Roman Shaposhnik wrote:
>> This search made me come across an old set of slides from
>> Christoph Hellwig that did, indeed, mention that at some point
>> vhost/virtio fabric was considered.
>>
>> Please let me know if there's any work that has ever been done
>> in the direction (even if in a form of unmerged patches) or whether
>> if I need this functionality I'd have to implement it from scratch (probably
>> aping a great deal of drivers/vhost/scsi.c).
> 
> At some point I looked into this, as it seemed the next logical
> step in the ecosystem development, but I'm not sure whether it would
> be acceptable as-is or whether we'd need to change the spec. If there
> is a strong use case I'd definitely like to work on or contribute
> towards the development of this feature.
> 
> I'm interested in knowing what everyone thinks about this one ...
> 
Well, I had been looking into it for some time, too.
But it turns out to be quite challenging conceptually.

Modifying / leveraging virtio as an NVMe transport should be trivial, 
but then you'd need to put the virtio device on some emulated hardware, 
which typically (on x86) is a PCI device.
So you'd end up with a PCI device whose virtio queues transport NVMe 
SQEs/CQEs, raising the question why we need it at all, as we _could_ 
use an emulated NVMe PCI device to start with.
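
To make that concrete: each virtqueue element would carry one NVMe 
command plus its completion, roughly as sketched below. The struct name 
virtio_nvme_req is hypothetical; only the SQE/CQE types come from 
<linux/nvme.h>.

/*
 * Hypothetical wire format for a virtio-based NVMe transport --
 * an illustration only, not defined by any VIRTIO or NVMe spec.
 * The driver fills in the 64-byte SQE; the device writes back the
 * 16-byte CQE once the command has completed.
 */
#include <linux/nvme.h>

struct virtio_nvme_req {
	struct nvme_command	sqe;	/* 64-byte submission queue entry */
	/* data buffers, if any, chained as further descriptors */
	struct nvme_completion	cqe;	/* 16-byte completion queue entry */
};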

But then _that_ would need to support PRPs, and you can't send discovery 
and/or connect commands across it; those are fabrics-only commands that 
the PCI register-level transport simply doesn't define.
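
For reference, the connect command looks roughly like this (simplified 
from the kernel's include/linux/nvme.h, so treat the exact reserved-field 
layout as approximate):

/*
 * A fabrics connect command.  opcode 0x7f marks it as a fabrics
 * command; fctype then selects 'connect'.  Nothing in the NVMe PCI
 * transport carries this, which is why an emulated NVMe PCI device
 * can't be used for discovery/connect.
 */
struct nvmf_connect_command {
	__u8	opcode;		/* nvme_fabrics_command (0x7f) */
	__u8	resv1;
	__u16	command_id;
	__u8	fctype;		/* nvme_fabrics_type_connect (0x01) */
	__u8	resv2[19];
	union nvme_data_ptr dptr;	/* SGL describing the connect data */
	__le16	recfmt;
	__le16	qid;		/* queue to connect; 0 is the admin queue */
	__le16	sqsize;
	__u8	cattr;
	__u8	resv3;
	__le32	kato;		/* keep-alive timeout */
	__u8	resv4[12];
};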

So there might be value in having a virtio transport, just so that you 
can use the fabrics commands.
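
If someone did define such a transport, the device side would presumably 
advertise its queue resources through a virtio config space along these 
lines. Everything below is hypothetical -- no virtio-nvme device type 
exists in the VIRTIO specification:

/*
 * Entirely hypothetical virtio-nvme config space, sketched by analogy
 * with virtio-blk/virtio-scsi: one admin virtqueue pair plus N I/O
 * virtqueue pairs, each carrying NVMe SQEs/CQEs as sketched above.
 */
struct virtio_nvme_config {
	__le32	max_io_queues;	/* I/O virtqueues offered by the device */
	__le16	max_sqes;	/* max submission queue entries per queue */
	__le16	max_cqes;	/* max completion queue entries per queue */
	__le32	max_data_size;	/* largest transfer per command, in bytes */
};

Discovery and connect would then run over the admin virtqueue exactly as 
they do on the existing fabrics transports.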

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare at suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


