[PATCH 7/9] nvme-pci: convert the data mapping blk_rq_dma_map

Keith Busch kbusch at kernel.org
Tue Jun 17 16:25:09 PDT 2025


On Tue, Jun 17, 2025 at 07:33:46PM +0200, Daniel Gomez wrote:
> On 16/06/2025 13.33, Christoph Hellwig wrote:
> > On Mon, Jun 16, 2025 at 09:41:15AM +0200, Daniel Gomez wrote:
> >> Also, if host segments are between 4k and 16k, PRPs would be able to support it
> >> but this limit prevents that use case. I guess the question is if you see any
> >> blocker to enable this path?
> > 
> > Well, if you think it's worth it give it a spin on a wide variety of
> > hardware.
> 
> I'm not sure if I understand this. Can you clarify why hardware evaluation would
> be required? What exactly?

This is about chaining SGLs, so I think the request is to benchmark
whether that's faster than splitting commands. Splitting has been
quicker on much hardware because controllers could process SQEs in
parallel more easily than walk a single command's SG list.

On a slightly related topic, NVMe SGLs don't need the
"virt_boundary_mask". So for devices optimized for SGLs, that queue
limit could go away, and I've recently heard of use cases for the
passthrough interface where dropping it would help avoid kernel
bounce-buffer copies (sorry for the digression).
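
To make that concrete, a rough, untested sketch of the direction
(nvme_ctrl_sgl_supported(), queue_limits.virt_boundary_mask and
NVME_CTRL_PAGE_SIZE exist today, but the helper name and hook point
here are assumptions, and a real change would also have to ensure
such a queue never falls back to the PRP path):

    /*
     * Hedged sketch, not a tested patch: only advertise a virt
     * boundary when the controller has to use PRPs, since PRP lists
     * are what require page-aligned middle segments; SGLs have no
     * such restriction.
     */
    static void nvme_set_virt_boundary(struct nvme_ctrl *ctrl,
                                       struct queue_limits *lim)
    {
            if (nvme_ctrl_sgl_supported(ctrl))
                    lim->virt_boundary_mask = 0;
            else
                    lim->virt_boundary_mask = NVME_CTRL_PAGE_SIZE - 1;
    }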


