[PATCH v9 7/7] iommu/arm-smmu-v3: Perform per-domain invalidations using arm_smmu_invs
Nicolin Chen
nicolinc at nvidia.com
Fri Jan 23 09:47:52 PST 2026
On Fri, Jan 23, 2026 at 05:07:15PM +0000, Will Deacon wrote:
> On Fri, Dec 19, 2025 at 12:11:29PM -0800, Nicolin Chen wrote:
> > Replace the old invalidation functions with arm_smmu_domain_inv_range() in
> > all the existing invalidation routines. And deprecate the old functions.
> >
> > The new arm_smmu_domain_inv_range() handles the CMDQ_MAX_TLBI_OPS as well,
> > so drop it in the SVA function.
> >
> > Since arm_smmu_cmdq_batch_add_range() has only one caller now, and it must
> > be given a valid size, add a WARN_ON_ONCE to catch any missed case.
> >
> > Reviewed-by: Jason Gunthorpe <jgg at nvidia.com>
> > Signed-off-by: Nicolin Chen <nicolinc at nvidia.com>
> > ---
> > drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 7 -
> > .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 29 +--
> > drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 165 +-----------------
> > 3 files changed, 11 insertions(+), 190 deletions(-)
>
> It's one thing replacing the invalidation implementation but I think you
> need to update some of the old ordering comments, too. In particular,
> the old code relies on the dma_wmb() during cmdq insertion to order
> updates to in-memory structures, which includes the pgtable in non-strict
> mode.
>
> I don't think any of that is true now?
OK. I'll update those ordering comments, folding in your latest
suggestion of using dma_mb() vs. smp_mb(). I assume this will
just be a matter of s/dma_wmb/dma_mb/ in those comments.
Nicolin