[RFC PATCH 4/5] iommu/arm-smmu-v3: Use pinned VMID for NESTED stage with BTM

Shameerali Kolothum Thodi shameerali.kolothum.thodi at huawei.com
Fri Mar 5 08:51:05 GMT 2021


Hi Jean,

> -----Original Message-----
> From: Jean-Philippe Brucker [mailto:jean-philippe at linaro.org]
> Sent: 04 March 2021 17:11
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi at huawei.com>
> Cc: linux-arm-kernel at lists.infradead.org; iommu at lists.linux-foundation.org;
> kvmarm at lists.cs.columbia.edu; maz at kernel.org;
> alex.williamson at redhat.com; eric.auger at redhat.com;
> zhangfei.gao at linaro.org; Jonathan Cameron
> <jonathan.cameron at huawei.com>; Zengtao (B) <prime.zeng at hisilicon.com>;
> linuxarm at openeuler.org
> Subject: Re: [RFC PATCH 4/5] iommu/arm-smmu-v3: Use pinned VMID for
> NESTED stage with BTM
> 
> Hi Shameer,
> 
> On Mon, Feb 22, 2021 at 03:53:37PM +0000, Shameer Kolothum wrote:
> > If the SMMU supports BTM and the device belongs to NESTED domain
> > with shared pasid table, we need to use the VMID allocated by the
> > KVM for the s2 configuration. Hence, request a pinned VMID from KVM.
> >
> > Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi at huawei.com>
> > ---
> >  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++++++++++++++-
> >  1 file changed, 47 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > index 26bf7da1bcd0..04f83f7c8319 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > @@ -28,6 +28,7 @@
> >  #include <linux/pci.h>
> >  #include <linux/pci-ats.h>
> >  #include <linux/platform_device.h>
> > +#include <linux/kvm_host.h>
> >
> >  #include <linux/amba/bus.h>
> >
> > @@ -2195,6 +2196,33 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
> >  	clear_bit(idx, map);
> >  }
> >
> > +static int arm_smmu_pinned_vmid_get(struct arm_smmu_domain *smmu_domain)
> > +{
> > +	struct arm_smmu_master *master;
> > +
> > +	master = list_first_entry_or_null(&smmu_domain->devices,
> > +					  struct arm_smmu_master, domain_head);
> 
> This probably needs to hold devices_lock while using master.

Ok.

> 
> > +	if (!master)
> > +		return -EINVAL;
> > +
> > +	return kvm_pinned_vmid_get(master->dev);
> > +}
> > +
> > +static int arm_smmu_pinned_vmid_put(struct arm_smmu_domain *smmu_domain)
> > +{
> > +	struct arm_smmu_master *master;
> > +
> > +	master = list_first_entry_or_null(&smmu_domain->devices,
> > +					  struct arm_smmu_master, domain_head);
> > +	if (!master)
> > +		return -EINVAL;
> > +
> > +	if (smmu_domain->s2_cfg.vmid)
> > +		return kvm_pinned_vmid_put(master->dev);
> > +
> > +	return 0;
> > +}
> > +
> >  static void arm_smmu_domain_free(struct iommu_domain *domain)
> >  {
> >  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> > @@ -2215,8 +2243,11 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
> >  		mutex_unlock(&arm_smmu_asid_lock);
> >  	}
> >  	if (s2_cfg->set) {
> > -		if (s2_cfg->vmid)
> > -			arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
> > +		if (s2_cfg->vmid) {
> > +			if (!(smmu->features & ARM_SMMU_FEAT_BTM) &&
> > +			    smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
> > +				arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
> > +		}
> >  	}
> >
> >  	kfree(smmu_domain);
> > @@ -3199,6 +3230,17 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
> >  				!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
> >  			goto out;
> >
> > +		if (smmu->features & ARM_SMMU_FEAT_BTM) {
> > +			ret = arm_smmu_pinned_vmid_get(smmu_domain);
> > +			if (ret < 0)
> > +				goto out;
> > +
> > +			if (smmu_domain->s2_cfg.vmid)
> > +				arm_smmu_bitmap_free(smmu->vmid_map, smmu_domain->s2_cfg.vmid);
> > +
> > +			smmu_domain->s2_cfg.vmid = (u16)ret;
> 
> That will require a TLB invalidation on the old VMID, once the STE is
> rewritten.

True. Will add that.

> More generally I think this pinned VMID set conflicts with that of
> stage-2-only domains (which is the default state until a guest attaches a
> PASID table). Say you have one guest using DOMAIN_NESTED without PASID
> table, just DMA to IPA using VMID 0x8000. Now another guest attaches a
> PASID table and obtains the same VMID from KVM. The stage-2 translation
> might use TLB entries from the other guest, no?  They'll both create
> stage-2 TLB entries with {StreamWorld=NS-EL1, VMID=0x8000}
> 
> It's tempting to allocate all VMIDs through KVM instead, but that will
> force a dependency on KVM to use VFIO_TYPE1_NESTING_IOMMU and might break
> existing users of that extension (though I'm not sure there are any).
> Instead we might need to restrict the SMMU VMID bitmap to match the
> private VMID set in KVM.

Right, that is indeed a problem. I will take a look at this suggestion.
 
> Besides we probably want to restrict this feature to systems supporting
> VMID16 on both SMMU and CPUs, or at least check that they are compatible.

Yes. Ideally I would like to detect that in the KVM code and enable/disable the
VMID splitting accordingly, but I have yet to figure out an easy way to do that
in KVM.

> > +		}
> > +
> >  		smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
> >  		smmu_domain->s1_cfg.s1cdmax = cfg->pasid_bits;
> >  		smmu_domain->s1_cfg.s1fmt = cfg->vendor_data.smmuv3.s1fmt;
> > @@ -3221,6 +3263,7 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
> >  static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
> >  {
> >  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> > +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> >  	struct arm_smmu_master *master;
> >  	unsigned long flags;
> >
> > @@ -3237,6 +3280,8 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
> >  		arm_smmu_install_ste_for_dev(master);
> >  	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> >
> > +	if (smmu->features & ARM_SMMU_FEAT_BTM)
> > +		arm_smmu_pinned_vmid_put(smmu_domain);
> 
> Aliasing here as well: the VMID is still live but can be reallocated by
> KVM and another domain might obtain it.

Ok. Got it.

Thanks for the review,
Shameer

> 
> Thanks,
> Jean
> 
> >  unlock:
> >  	mutex_unlock(&smmu_domain->init_mutex);
> >  }
> > --
> > 2.17.1
> >


