[PATCH] nvme: uring_cmd specific request_queue for SGLs

Keith Busch kbusch at kernel.org
Thu Jun 26 08:29:21 PDT 2025


On Thu, Jun 26, 2025 at 07:14:13AM +0200, Christoph Hellwig wrote:
> On Wed, Jun 25, 2025 at 04:08:28PM -0600, Keith Busch wrote:
> > 
> > It looks straightforward to add merging while we iterate for the direct
> > mapping result if it returns mergeable IOVAs, but I think we'd have to
> > commit to using SGL over PRP for everything but the simple case, and
> > drop the PRP-imposed virt boundary. The downside might be that we'd lose
> > the IOVA pre-allocation optimization (dma_iova_try_alloc) you have going
> > on, but I'm not sure how important that is. Could the direct mapping get
> > too fragmented to consistently produce contiguous IOVAs in this path?
> 
> I can't really parse this.  Direct mapping means not using an IOMMU
> mapping, either because there is none or because it is configured to
> do an identity mapping.  In that case we'll never use the IOVA path.

Okay, maybe I'm confused. The code looks like it defaults to the direct
mapping if the range can't be coalesced. What if the IOMMU granularity is
8k against nvme's 4k virt boundary? We still need the IOMMU DMA mappings
in the direct mapping fallback, right? They should just appear as
different DMA segments.
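
To make the 4k virt boundary concrete: with PRP, every data segment
except the first must start on a 4k boundary and every segment except the
last must end on one, which the block layer expresses as a virt_boundary
mask. Below is a minimal user-space sketch of that gap check; it is
illustrative only, not the kernel's actual helper, and the function name
is made up:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of a virt-boundary gap check. With a 4k boundary the mask is
 * 0xfff; two adjacent segments can only merge into one request if the
 * first ends exactly on the boundary and the second starts on it,
 * because a PRP list has no way to describe a hole in the middle. */
bool virt_gap(uint64_t prev_addr, uint32_t prev_len,
              uint64_t next_addr, uint64_t boundary_mask)
{
	/* gap if the previous segment does not end on the boundary ... */
	if ((prev_addr + prev_len) & boundary_mask)
		return true;
	/* ... or the next segment does not start on it */
	return (next_addr & boundary_mask) != 0;
}
```

With an 8k IOMMU granularity the equivalent mask would be 0x1fff, so
segments that satisfy the 4k check can still fail to coalesce at the
IOMMU level, which is the mismatch in question here.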

When it comes to integrity payloads, merged bios are almost certainly
not eligible to be coalesced in IOVA space since they're usually
allocated independently at much smaller granularities, so again, I'd
expect we'd get multiple integrity DMA segments.
 
> If an IOMMU is configured for dynamic IOMMU mappings we never use the
> direct mapping.  In that case we'd have to do one IOMMU mapping per
> segment with the IOVA mapping path that requires (IOMMU) page alignment,
> which will be very expensive.
More information about the Linux-nvme mailing list