[PATCH v2 2/5] iommu/arm-smmu: Convert to a global static identity domain

Jason Gunthorpe <jgg@nvidia.com>
Wed Dec 13 05:32:36 PST 2023


On Wed, Dec 13, 2023 at 01:26:52PM +0000, Will Deacon wrote:
> On Tue, Dec 12, 2023 at 10:15:16AM -0400, Jason Gunthorpe wrote:
> > On Tue, Dec 12, 2023 at 01:27:08PM +0000, Will Deacon wrote:
> > > > +static int arm_smmu_attach_dev_identity(struct iommu_domain *domain,
> > > > +					struct device *dev)
> > > > +{
> > > > +	struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev);
> > > > +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> > > > +	struct arm_smmu_device *smmu;
> > > > +	int ret;
> > > > +
> > > > +	if (!cfg)
> > > > +		return -ENODEV;
> > > > +	smmu = cfg->smmu;
> > > > +
> > > > +	ret = arm_smmu_rpm_get(smmu);
> > > > +	if (ret < 0)
> > > > +		return ret;
> > > > +
> > > > +	arm_smmu_master_install_s2crs(cfg, S2CR_TYPE_BYPASS, 0, fwspec);
> > > > +
> > > > +	pm_runtime_set_autosuspend_delay(smmu->dev, 20);
> > > > +	pm_runtime_use_autosuspend(smmu->dev);
> > > 
> > > This is cargo-culted from arm_smmu_attach_dev() with the comments dropped
> > > and it's not clear at all to me that the autosuspend delay makes any sense
> > > for the identity domain.
> > 
> > Indeed, but that is how it worked before this split-up.
> 
> Yeah, and I suppose given that I certainly don't have an easy way to test
> this, there's a lot to be said for preserving the current behaviour.

Right, that is why I did it.
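
For context, the delay only matters because every map/unmap brackets
the hardware access with a runtime-PM get/put. Roughly like this
(a sketch with a hypothetical smmu_hw_op() standing in for the
driver's map/unmap paths, not code from the driver):

#include <linux/pm_runtime.h>

/* Hypothetical stand-in for any path that touches SMMU registers */
static int smmu_hw_op(struct device *smmu_dev)
{
	int ret;

	/* Resume the SMMU if it is runtime-suspended */
	ret = pm_runtime_resume_and_get(smmu_dev);
	if (ret < 0)
		return ret;

	/* ... reprogram the context bank / S2CRs ... */

	/*
	 * With autosuspend enabled this arms the 20ms timer instead
	 * of suspending immediately, so a burst of unmaps pays for
	 * one resume rather than one runpm cycle per buffer.
	 */
	pm_runtime_mark_last_busy(smmu_dev);
	pm_runtime_put_autosuspend(smmu_dev);
	return 0;
}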

> diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> index dde912f8ef35..dec912c27141 100644
> --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
> +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> @@ -82,6 +82,23 @@ static inline void arm_smmu_rpm_put(struct arm_smmu_device *smmu)
>  		pm_runtime_put_autosuspend(smmu->dev);
>  }
>  
> +static void arm_smmu_rpm_use_autosuspend(struct arm_smmu_device *smmu)
> +{
> +	/*
> +	 * Setup an autosuspend delay to avoid bouncing runpm state.
> +	 * Otherwise, if a driver for a suspended consumer device
> +	 * unmaps buffers, it will runpm resume/suspend for each one.
> +	 *
> +	 * For example, when used by a GPU device, when an application
> +	 * or game exits, it can trigger unmapping 100s or 1000s of
> +	 * buffers.  With a runpm cycle for each buffer, that adds up
> +	 * to 5-10sec worth of reprogramming the context bank, while
> +	 * the system appears to be locked up to the user.
> +	 */
> +	pm_runtime_set_autosuspend_delay(smmu->dev, 20);
> +	pm_runtime_use_autosuspend(smmu->dev);
> +}
> +
>  static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
>  {
>  	return container_of(dom, struct arm_smmu_domain, domain);
> @@ -1141,21 +1158,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  	/* Looks ok, so add the device to the domain */
>  	arm_smmu_master_install_s2crs(cfg, S2CR_TYPE_TRANS,
>  				      smmu_domain->cfg.cbndx, fwspec);
> -
> -	/*
> -	 * Setup an autosuspend delay to avoid bouncing runpm state.
> -	 * Otherwise, if a driver for a suspended consumer device
> -	 * unmaps buffers, it will runpm resume/suspend for each one.
> -	 *
> -	 * For example, when used by a GPU device, when an application
> -	 * or game exits, it can trigger unmapping 100s or 1000s of
> -	 * buffers.  With a runpm cycle for each buffer, that adds up
> -	 * to 5-10sec worth of reprogramming the context bank, while
> -	 * the system appears to be locked up to the user.
> -	 */
> -	pm_runtime_set_autosuspend_delay(smmu->dev, 20);
> -	pm_runtime_use_autosuspend(smmu->dev);
> -
> +	arm_smmu_rpm_use_autosuspend(smmu);
>  rpm_put:
>  	arm_smmu_rpm_put(smmu);
>  	return ret;
> @@ -1178,9 +1181,7 @@ static int arm_smmu_attach_dev_identity(struct iommu_domain *domain,
>  		return ret;
>  
>  	arm_smmu_master_install_s2crs(cfg, S2CR_TYPE_BYPASS, 0, fwspec);
> -
> -	pm_runtime_set_autosuspend_delay(smmu->dev, 20);
> -	pm_runtime_use_autosuspend(smmu->dev);
> +	arm_smmu_rpm_use_autosuspend(smmu);
>  	arm_smmu_rpm_put(smmu);
>  	return 0;
>  }

Looks good, thanks
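
For completeness, the end state this feeds into is a single global
static identity domain that the new attach op hangs off, along these
lines (shape inferred from the series, not copied verbatim):

static const struct iommu_domain_ops arm_smmu_identity_ops = {
	.attach_dev = arm_smmu_attach_dev_identity,
};

static struct iommu_domain arm_smmu_identity_domain = {
	.type = IOMMU_DOMAIN_IDENTITY,
	.ops = &arm_smmu_identity_ops,
};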

Jason


