[PATCH 21/27] iommu/arm-smmu-v3: Put the SVA mmu notifier in the smmu_domain

Jason Gunthorpe jgg at nvidia.com
Wed Oct 25 09:23:40 PDT 2023


On Wed, Oct 11, 2023 at 08:25:57PM -0300, Jason Gunthorpe wrote:
> @@ -675,6 +401,8 @@ struct iommu_domain *arm_smmu_sva_domain_alloc(struct device *dev,
>  	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
>  	struct arm_smmu_device *smmu = master->smmu;
>  	struct arm_smmu_domain *smmu_domain;
> +	u32 asid;
> +	int ret;
>  
>  	smmu_domain = arm_smmu_domain_alloc();
>  	if (!smmu_domain)
> @@ -684,5 +412,22 @@ struct iommu_domain *arm_smmu_sva_domain_alloc(struct device *dev,
>  	smmu_domain->domain.ops = &arm_smmu_sva_domain_ops;
>  	smmu_domain->smmu = smmu;
>  
> +	ret = xa_alloc(&arm_smmu_asid_xa, &asid, smmu_domain,
> +		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
> +	if (ret)
> +		goto err_free;

I found a mistake here: this patch removes all of the xa_load()s, and
this xa_alloc() changes the contents of arm_smmu_asid_xa to point to
the smmu_domain rather than the cd.

There is another store in arm_smmu_domain_finalise_s1() that needs
changing as well:

-       ret = xa_alloc(&arm_smmu_asid_xa, &asid, cd,
+       ret = xa_alloc(&arm_smmu_asid_xa, &asid, smmu_domain,
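For reference, with that fix applied the allocation in
arm_smmu_domain_finalise_s1() would look roughly like the sketch below.
This is only an illustration against this series, with surrounding
error handling abbreviated; the error label and the cd->asid
assignment are placeholders for whatever the real function does:

```c
	/* Store the smmu_domain, not the cd, so that a later
	 * xa_load() on a colliding ASID returns the domain. */
	ret = xa_alloc(&arm_smmu_asid_xa, &asid, smmu_domain,
		       XA_LIMIT(1, (1U << smmu->asid_bits) - 1),
		       GFP_KERNEL);
	if (ret)
		goto out_unlock;	/* illustrative label */
	cd->asid = (u16)asid;
```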

The following patches assume the xarray is filled with smmu_domain
pointers. This only impacts the hard-to-hit BTM ASID collision
support, which is why it hasn't been noticed in testing yet.

Jason
