[PATCH v2 0/5] Convert riscv to use the generic iommu page table
Jason Gunthorpe
jgg at nvidia.com
Mon Feb 2 06:37:20 PST 2026
On Mon, Feb 02, 2026 at 02:00:07PM +0000, Robin Murphy wrote:
> > DMA-FQ requires two functionalities from the page table:
> > 1) use gather->freelist to avoid a HW UAF (iommupt always does this)
>
> Nope, correct DMA API usage would almost never unmap an entire table, so
> synchronous non-leaf maintenance in that path still doesn't hurt DMA-FQ
> either (e.g. io-pgtable-arm).
Well, it certainly would hurt workloads like IB MRs, which can have
quite a lot of IOVA in a single dma_map_sg(), and we do want to see the
table levels removed to avoid the waste that Pasha has talked
about. Doing individual invalidations for potentially many levels in a
DMA-FQ environment is unnecessary overhead.
But I get your point that simple, say storage, use of the DMA API
wouldn't be bothered by this, and you could still get a lot of benefit
without using the free list.
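To be concrete about the ordering (this is just a sketch with made-up
names, not the iommupt code): a removed table level has to outlive any
cached hardware walk, so it is queued on the gather and only freed
after the invalidation, which is exactly the work a flush queue can
batch:

#include <linux/list.h>

/*
 * Illustrative only; struct pt_table and the pt_* helpers are made-up
 * names. The point is the ordering: unhook the level, queue the page,
 * invalidate, and only then free, so HW can never follow a stale
 * non-leaf entry into freed memory.
 */
struct pt_table {
	struct list_head node;
};

struct pt_gather {
	struct list_head freelist;	/* table pages awaiting free */
	unsigned long start, end;	/* IOVA range to invalidate */
};

static void pt_remove_table(struct pt_gather *gather, struct pt_table *tbl)
{
	pt_clear_parent_entry(tbl);			/* unhook from the tree */
	list_add_tail(&tbl->node, &gather->freelist);	/* defer the free */
}

static void pt_sync(struct pt_gather *gather)
{
	pt_invalidate_range(gather->start, gather->end); /* flush HW first */
	pt_free_list(&gather->freelist);		 /* now safe to free */
}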
> If a pagetable implementation wanted to refcount and eagerly free empty
> tables upon leaf unmaps, then yes it would need deferred freeing, but
> frankly it would be better off just not doing that at all for DMA-FQ anyway
> (as IOVA caching would make it likely to need to repopulate the same level
> of table soon.)
Today it isn't done with refcounts; if the unmapped IOVA range
fully contains a table level then that table level can go away too. It
does trim interior page tables for large IOVA allocations, but small
ones are unlikely to free anything.
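Roughly (made-up names, not the actual iommupt helpers), the trimming
rule is just a range-containment check, which is why no per-entry
refcounting is needed and why small unmaps rarely free anything:

/*
 * Illustrative helper: a table level is only eligible for removal when
 * the unmapped range [unmap_start, unmap_end] completely covers the
 * IOVA span [level_start, level_start + level_size) that the level
 * translates.
 */
static bool pt_level_fully_unmapped(unsigned long unmap_start,
				    unsigned long unmap_end,
				    unsigned long level_start,
				    unsigned long level_size)
{
	unsigned long level_end = level_start + level_size - 1;

	return unmap_start <= level_start && unmap_end >= level_end;
}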
> > The one call to iommu_iotlb_sync() is only for the para-virtualization
> > optimization of narrowing invalidations. It would be nonsensical for a
> > driver to enable this optimization and offer IOMMU_CAP_DEFERRED_FLUSH.
>
> Not necessarily - in the PV case it can be desirable to minimise
> over-invalidation *if* you're trapping for targeted invalidations in strict
> mode. However, depending on the usage pattern it may also be beneficial to
> have non-strict let the FQ mechanism batch up work to minimise the number of
> traps taken - e.g. s390 is in this situation, and is precisely why we added
> IOMMU_DMA_OPTS_SINGLE_QUEUE to help optimise for that.
Okay, so if I understand you right, it should check
iommu_iotlb_gather_queued() and disable PT_FEAT_FLUSH_RANGE_NO_GAPS
mode entirely? I.e. there is no point in doing narrow invalidations if
the caller is going to do a flush-all anyway?
This way the user gets to pick between DMA-FQ and DMA-strict?
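Something like this is what I have in mind (sketch only;
iommu_iotlb_gather_queued() is the existing helper, the other names are
illustrative):

/*
 * Sketch of the proposed check; pt_unmap_flush() and
 * pt_gather_add_precise_range() are made-up names. When the caller is
 * running a flush queue (DMA-FQ), gather->queued is set and the
 * batched flush-all covers the unmap, so the narrow
 * PT_FEAT_FLUSH_RANGE_NO_GAPS style tracking is skipped.
 */
static void pt_unmap_flush(struct iommu_iotlb_gather *gather,
			   unsigned long iova, size_t size)
{
	if (iommu_iotlb_gather_queued(gather))
		return;		/* DMA-FQ: flush queue handles it */

	/* Strict / PV path: narrow the invalidation to just this range */
	pt_gather_add_precise_range(gather, iova, size);
}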
Also Intel would probably benefit from .shadow_on_flush too?
Jason