[PATCH RESEND v9 08/13] iommu/arm-smmu-v3: Share process page tables

Jean-Philippe Brucker jean-philippe at linaro.org
Thu Sep 17 10:37:55 EDT 2020


On Tue, Sep 08, 2020 at 09:41:20AM +0200, Auger Eric wrote:
> > +static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
> > +{
> > +	u16 asid;
> > +	int err = 0;
> > +	u64 tcr, par, reg;
> > +	struct arm_smmu_ctx_desc *cd;
> > +	struct arm_smmu_ctx_desc *ret = NULL;
> > +
> > +	asid = arm64_mm_context_get(mm);
> > +	if (!asid)
> > +		return ERR_PTR(-ESRCH);
> > +
> > +	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
> > +	if (!cd) {
> > +		err = -ENOMEM;
> > +		goto out_put_context;
> > +	}
> > +
> > +	refcount_set(&cd->refs, 1);
> > +
> > +	mutex_lock(&arm_smmu_asid_lock);
> > +	ret = arm_smmu_share_asid(mm, asid);
> > +	if (ret) {
> > +		mutex_unlock(&arm_smmu_asid_lock);
> > +		goto out_free_cd;
> > +	}
> > +
> > +	err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
> > +	mutex_unlock(&arm_smmu_asid_lock);
> I am not clear about the locking scope. Can't we release the lock earlier,
> since, if I understand correctly, xa_insert/xa_erase already take the
> xa_lock?

xa_insert() does take the xa_lock internally, but that only protects the
xarray itself. The mutex covers a wider section: it prevents a race with
the private ASID allocation done by arm_smmu_domain_finalise_s1():

	CPU 1					CPU 2
arm_smmu_alloc_shared_cd()		arm_smmu_attach_dev()
 arm_smmu_share_asid(mm, 1)		 arm_smmu_domain_finalise_s1()
  xa_load(&asid_xa, 1) -> NULL, ok
					  xa_alloc(&asid_xa) -> ASID #1
 xa_insert(&asid_xa, 1) -> error
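
To make the scope concrete, here is a rough sketch (simplified names, not
the exact driver code) of how both allocation paths serialize on the
mutex; the xa_lock taken internally by xa_insert()/xa_alloc() only
protects the xarray itself, not the check-then-insert window shown above:

	/* Shared CD path (arm_smmu_alloc_shared_cd()) */
	mutex_lock(&arm_smmu_asid_lock);
	ret = arm_smmu_share_asid(mm, asid);	/* steals the ASID if a private domain owns it */
	if (!ret)
		err = xa_insert(&arm_smmu_asid_xa, asid, cd, GFP_KERNEL);
	mutex_unlock(&arm_smmu_asid_lock);

	/* Private domain path (arm_smmu_domain_finalise_s1()), same lock */
	mutex_lock(&arm_smmu_asid_lock);
	err = xa_alloc(&arm_smmu_asid_xa, &asid, cd,
		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
	mutex_unlock(&arm_smmu_asid_lock);

Dropping the mutex right after arm_smmu_share_asid() would let the private
path allocate the same ASID before the xa_insert(), which is exactly the
race above.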

> > +
> > +	if (err)
> > +		goto out_free_asid;
> > +
> > +	tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - VA_BITS) |
> Wondering whether an additional check is needed to make sure T0SZ is
> valid, as documented in 5.4 Context Descriptor, T0SZ description.

Indeed, this code probably predates 52-bit VA support. Patch 10 should now
check the VA limits, and we should use vabits_actual rather than VA_BITS.
I'll leave out IDR3.STT for now, since arch/arm64/ doesn't support it.
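
For the T0SZ part, something along these lines (just a sketch: the "ias"
field name and the placement of the check are assumptions here, the real
check belongs in patch 10):

	/*
	 * Reject the bind if the CPU uses more VA bits than the SMMU can
	 * translate ("ias" standing in for the SMMU input address size).
	 */
	if (vabits_actual > smmu->ias)
		return ERR_PTR(-ENXIO);

	/*
	 * Derive T0SZ from the runtime VA size rather than the compile-time
	 * VA_BITS; the other TCR fields stay as in the patch.
	 */
	tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - vabits_actual);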

Thanks,
Jean


