[PATCH v9 7/7] iommu/arm-smmu-v3: Perform per-domain invalidations using arm_smmu_invs

Will Deacon will at kernel.org
Fri Jan 23 09:07:15 PST 2026


On Fri, Dec 19, 2025 at 12:11:29PM -0800, Nicolin Chen wrote:
> Replace the old invalidation functions with arm_smmu_domain_inv_range() in
> all the existing invalidation routines, and deprecate the old functions.
> 
> The new arm_smmu_domain_inv_range() handles the CMDQ_MAX_TLBI_OPS as well,
> so drop it in the SVA function.
> 
> Since arm_smmu_cmdq_batch_add_range() now has only one caller, which must
> pass a valid size, add a WARN_ON_ONCE to catch any missed case.
> 
> Reviewed-by: Jason Gunthorpe <jgg at nvidia.com>
> Signed-off-by: Nicolin Chen <nicolinc at nvidia.com>
> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |   7 -
>  .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   |  29 +--
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 165 +-----------------
>  3 files changed, 11 insertions(+), 190 deletions(-)

It's one thing to replace the invalidation implementation, but I think you
need to update some of the old ordering comments, too. In particular,
the old code relies on the dma_wmb() during cmdq insertion to order
updates to in-memory structures, which includes the pgtable in non-strict
mode.
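
Roughly, the pattern the old code depended on looks like this (a pseudocode
sketch of the non-strict unmap path based on the description above, not the
exact driver code; queue_write() here stands in for the real cmdq producer
update):

```c
/* CPU tears down a mapping in the io-pgtable ... */
WRITE_ONCE(*ptep, 0);

/*
 * ... and the dma_wmb() in the cmdq insertion path also ordered that
 * in-memory pgtable update before the SMMU could observe the queued
 * TLBI command.
 */
dma_wmb();
queue_write(cmdq, tlbi_cmd);	/* hypothetical producer update */
```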

I don't think any of that is true now?

Will



More information about the linux-arm-kernel mailing list