[PATCH v2 14/18] iommu/arm-smmu-v3: Support domains with shared CDs
Jason Gunthorpe
jgg at nvidia.com
Wed Jun 14 05:10:08 PDT 2023
On Wed, Jun 14, 2023 at 05:17:03PM +0800, Michael Shavit wrote:
> On Wed, Jun 7, 2023 at 1:09 AM Jason Gunthorpe <jgg at nvidia.com> wrote:
> >
> > On Tue, Jun 06, 2023 at 08:07:50PM +0800, Michael Shavit wrote:
> > > SVA may attach a CD to masters that have different upstream SMMU
> > > devices. The arm_smmu_domain structure can only be attached to a single
> > > upstream SMMU device however.
> >
> > Isn't that pretty much because we don't support replicating
> > invalidations to each of the different SMMU instances?
>
> Looked into this some more, and supporting attach to multiple devices
> is still very hard:
> 1. When an arm_smmu_domain is first attached to a master, it
> initializes an io_pgtable_cfg object whose properties depend on the
> master's upstream SMMU device.
So, this is actually kind of wrong; Robin is working on fixing it.
The mental model you should have is when an iommu_domain is created it
represents a single, definitive, IO page table format. The format is
selected based on creation arguments and the 'master' it will be used
with.
An iommu_domain has a single IO page table in a single format, and
that format doesn't change while the iommu_domain exists.
Even single-instance cases have S1 and S2 IO page table formats
co-existing.
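
Roughly, as a sketch (these are illustrative structures to show the
model, not the actual driver types):

/*
 * Illustrative only: the format and cfg are chosen once when the
 * domain is created and never change for the domain's lifetime.
 */
enum my_pgtbl_fmt {
	MY_FMT_S1_AARCH64_LPAE,
	MY_FMT_S2_AARCH64_LPAE,
};

struct my_iommu_domain {
	enum my_pgtbl_fmt fmt;		/* fixed at domain creation */
	struct io_pgtable_cfg cfg;	/* fixed granule, VA/PA sizes, etc */
	struct iommu_domain domain;
};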
When we talk about multi-instance support, it means the iommu_domain -
in whatever fixed IO page table format it uses - can be attached to
any SMMU instance that supports that page table format.
ARM doesn't quite reach this model, but once a domain passes the
finalize step it does. The goal is to move finalize to allocation, so
for your purposes you can ignore the difference.
> domains). So then arm_smmu_domain needs to be split into two,
> arm_smmu_domain and arm_smmu_device_domain with the latter containing
> a per-SMMU device io_pgtable, arm_smmu_ctx_desc and arm_smmu_s2_cfg.
> Each iommu_domain_ops operation now needs to loop over each
> arm_smmu_device_domain.
No, if the instance is not compatible with the domain then the
attachment fails.
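
Something like this (my_smmu_supports_fmt() is a made-up helper, and
the types are the illustrative ones from above):

static int my_attach_dev(struct my_iommu_domain *my_domain,
			 struct my_smmu_device *smmu)
{
	/*
	 * The domain's format was fixed at creation; an instance that
	 * cannot walk that format simply cannot take this attachment.
	 */
	if (!my_smmu_supports_fmt(smmu, my_domain->fmt))
		return -EINVAL;

	/* ... program the STE/CD for this master as usual ... */
	return 0;
}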
> 2. Some of the iommu_domain fields also depend on the per-SMMU
> io_pgtable_cfg; specifically pgsize_bitmap and geometry.aperture_end.
> These need to be restricted as the domain is attached to more
> devices.
These need to be an exact match then.
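
I.e. something along these lines (again made-up names), checked at
attach time rather than narrowed after the fact:

static bool my_instance_matches(struct my_smmu_device *smmu,
				struct iommu_domain *domain)
{
	/*
	 * These were already published to users of the domain, so a
	 * new instance has to match them exactly; we never shrink
	 * them to fit a less capable instance.
	 */
	return smmu->pgsize_bitmap == domain->pgsize_bitmap &&
	       my_aperture_end(smmu) == domain->geometry.aperture_end;
}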
> 3. Attaching a domain to a new SMMU device must be prohibited after
> any call to map_pages or if iommu_domain.pgsize_bitmap and
> iommu_domain.geometry.aperture_end have been consumed by any system.
Attach needs to fail, we don't reconfigure the domain.
> The arm-smmu-v3-sva.c implementation avoids all these problems
> because it doesn't need to allocate an io_pgtable; the shared
> arm_smmu_ctx_desc's
This isn't true: it has an IO page table (owned by the MM), and that
page table has all the same properties you were concerned about
above. They all still need to be checked for compatibility.
I.e. from an SMMU HW instance perspective there is no difference
between an S1 or MM-owned IO page table. From a code perspective the
only difference should be where the TTBR comes from. The SVA case
should still have a cfg describing what kind of IO pagetable the mm
has created, and that cfg should still technically be checked for
compatibility.
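
For instance (sketch only; my_check_cfg() stands in for whatever
shared compatibility check the normal S1 path would use):

static int my_sva_check(struct my_smmu_device *smmu, struct mm_struct *mm)
{
	/*
	 * Describe the table the MM built in the same terms as a
	 * driver-allocated S1 table would be described.
	 */
	struct io_pgtable_cfg cfg = {
		.pgsize_bitmap	= PAGE_SIZE,	/* CPU base granule */
		.ias		= VA_BITS,	/* VA range the MM's table covers */
	};

	return my_check_cfg(smmu, &cfg);
}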
Keep in mind that SVA is *exactly* the same as an S1 iommu_domain with
PRI enabled except that the TTBR comes from the MM instead of being
internally allocated. There should be no other differences between the
two operating modes; that SVA became its own special thing needs to
be undone, not doubled down on.
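
The TTBR difference boils down to something like this (the helper is
illustrative; cd, mm and cfg are the existing kernel/driver types):

static void my_set_ttbr(struct arm_smmu_ctx_desc *cd, struct mm_struct *mm,
			struct io_pgtable_cfg *cfg)
{
	if (mm)		/* SVA: the table is owned by the MM */
		cd->ttbr = virt_to_phys(mm->pgd);
	else		/* normal S1: driver-allocated io_pgtable */
		cd->ttbr = cfg->arm_lpae_s1_cfg.ttbr;
}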
> > I think splitting it into a series to re-organize the way ste/cd stuff
> > works is a nice contained topic.
>
> Should I explicitly resend this patch series as a v3 cutting off
> patches 14 and onwards?
I think it is good to make progress; the first part looked to me like
it stood alone fairly well and was an improvement on its own.
Jason