[PATCH] nvme: don't set a virt_boundary unless needed

Sagi Grimberg sagi at grimberg.me
Mon Dec 25 01:20:35 PST 2023


>>>> NVMe PRPs are a pain and force the expensive virt_boundary checking on
>>>> the block layer, prevent secure passthrough and require scatter/gather I/O
>>>> to be split into multiple commands, which is problematic for the upcoming
>>>> atomic write support.
>>>
>>> But is the threshold still correct? Meaning, for small enough I/Os,
>>> will the device have lower performance? I'm not advocating that we keep it,
>>> but we should at least mention the tradeoff in the change log.
>>
>> Chaitanya benchmarked it on the first generation of devices that
>> supported SGLs.  On the only SGL-enabled device I have there is no
>> performance penalty for using SGLs on small transfers, but I'd love
>> to see numbers from other setups.
> 
> It's the larger transfers where it gets worse. To exaggerate the
> difference, consider sending a 2MB write with a virtually aligned but
> discontiguous user buffer: 512 folios.
> 
> PRP fits in 1 prp_page_pool block.
> 
> SGL needs 3 prp_page_pool blocks, tripling the command's memory usage.
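Just to spell out the arithmetic behind those numbers, a quick sketch
(assuming 4KiB pages, 8-byte PRP entries, 16-byte SGL descriptors and
4KiB prp_page_pool blocks, with the last entry of a full block used for
chaining):

#include <stdio.h>

int main(void)
{
	const unsigned int io_bytes   = 2u << 20;  /* 2MB write           */
	const unsigned int page_size  = 4096;      /* one entry per folio */
	const unsigned int pool_block = 4096;      /* prp_page_pool block */

	unsigned int pages = io_bytes / page_size;             /* 512 */

	/* PRP1 covers the first page, the rest go into a PRP list;
	 * the last entry of a full list page chains to the next page. */
	unsigned int prp_entries   = pages - 1;                /* 511 */
	unsigned int prp_per_block = pool_block / 8 - 1;       /* 511 */
	unsigned int prp_blocks =
		(prp_entries + prp_per_block - 1) / prp_per_block;

	/* SGL: one 16-byte descriptor per discontiguous folio; the last
	 * descriptor of a full segment chains to the next segment. */
	unsigned int sgl_per_block = pool_block / 16 - 1;      /* 255 */
	unsigned int sgl_blocks =
		(pages + sgl_per_block - 1) / sgl_per_block;

	printf("PRP pool blocks: %u\n", prp_blocks);   /* 1 */
	printf("SGL pool blocks: %u\n", sgl_blocks);   /* 3 */
	return 0;
}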

My assumption was that for large transfers the memory overhead is
negligible and the controller DMA streaming dominates performance.
The threshold was the minimum transfer size at which to use SGLs.
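To be clear about what I mean by "threshold": the decision is made
per-request, comparing the average segment size against the
sgl_threshold module parameter. Roughly this shape (a simplified
sketch, not the actual drivers/nvme/host/pci.c code):

static unsigned int sgl_threshold = 32 * 1024;  /* bytes, module parameter */

static bool use_sgls(bool ctrl_supports_sgls, unsigned int payload_bytes,
		     unsigned int nr_segments)
{
	unsigned int avg_seg_size = payload_bytes / nr_segments;

	if (!ctrl_supports_sgls)
		return false;
	/* Small transfers / small average segments stay on PRPs. */
	if (!sgl_threshold || avg_seg_size < sgl_threshold)
		return false;
	return true;
}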

In any event, I think we are talking about theoretical gains/losses
here. Unless anyone shows a real loss, we should go with the
simplicity.

Theoretically the driver could add incremental optimizations when a
request buffer could be mapped with either PRPs or SGLs, but I think
that is something that should be explored with performance measurements
on real devices.
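As a strawman for that kind of optimization: the driver could keep
preferring PRPs whenever the payload happens to respect the old
virt_boundary, i.e. every segment except the first starts page-aligned
and every segment except the last ends page-aligned. A hypothetical
helper, just to illustrate the condition (not existing driver code):

#include <stdbool.h>

struct seg {
	unsigned long long addr;
	unsigned int len;
};

/* Hypothetical check: can this segment list still be expressed as a
 * PRP list? */
static bool prp_compatible(const struct seg *segs, int nseg,
			   unsigned int page_size)
{
	for (int i = 0; i < nseg; i++) {
		/* all but the first segment must start on a page boundary */
		if (i != 0 && (segs[i].addr & (page_size - 1)))
			return false;
		/* all but the last segment must end on a page boundary */
		if (i != nseg - 1 &&
		    ((segs[i].addr + segs[i].len) & (page_size - 1)))
			return false;
	}
	return true;
}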


