[PATCH v5 01/17] iommu/arm-smmu-v3: Make STE programming independent of the callers

Jason Gunthorpe jgg at nvidia.com
Thu Feb 15 13:17:39 PST 2024


On Thu, Feb 15, 2024 at 06:42:37PM +0000, Robin Murphy wrote:

> > > > @@ -48,6 +48,21 @@ enum arm_smmu_msi_index {
> > > >   	ARM_SMMU_MAX_MSIS,
> > > >   };
> > > > +struct arm_smmu_entry_writer_ops;
> > > > +struct arm_smmu_entry_writer {
> > > > +	const struct arm_smmu_entry_writer_ops *ops;
> > > > +	struct arm_smmu_master *master;
> > > > +};
> > > > +
> > > > +struct arm_smmu_entry_writer_ops {
> > > > +	unsigned int num_entry_qwords;
> > > > +	__le64 v_bit;
> > > > +	void (*get_used)(const __le64 *entry, __le64 *used);
> > > > +	void (*sync)(struct arm_smmu_entry_writer *writer);
> > > > +};
> > > 
> > > Can we avoid the indirection for now, please? I'm sure we'll want it later
> > > when you extend this to CDs, but for the initial support it just makes it
> > > more difficult to follow the flow. Should be a trivial thing to drop, I
> > > hope.
> > 
> > We can.
> 
> Ack, the abstraction is really hard to follow, and much of that
> seems entirely self-inflicted in the amount of recalculating
> information which was in-context in a previous step but then thrown
> away.

I'm not sure I understand this, can you be more specific? I don't
know what you see us throwing away.

> And as best I can tell I think it will still end up doing more CFGIs
> than needed.

I think we've minimized the number of steps, and Michael did check
it, even pushing tests for the popular scenarios into the kunit. He
found one case where it was not optimal and it was improved.

Mostafa asked about extra syncs, and you can read my reply explaining
why they are there. We both agreed the syncs are necessary.

The only extra thing I know of is the zeroing of fields. Perhaps we
don't have to do this, but I think we should. Operating with the STE
in a known state seems like the conservative choice.

Regardless, if you have a case in mind where there are extra steps,
let's try it in the kunit and check.

This is not a performance path, so I wouldn't invest too much in this
question.
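
For concreteness, the flow under discussion is roughly the below. This
is a simplified sketch with illustrative names (entry_writer_update()
and MAX_ENTRY_QWORDS are made up here, and the real code also
pre-stages unused bits), not the patch code:

static void entry_writer_update(struct arm_smmu_entry_writer *writer,
				__le64 *cur, const __le64 *target)
{
	__le64 cur_used[MAX_ENTRY_QWORDS], target_used[MAX_ENTRY_QWORDS];
	unsigned int qwords = writer->ops->num_entry_qwords;
	unsigned int i, ndiff = 0, diff_qword = 0;

	writer->ops->get_used(cur, cur_used);
	writer->ops->get_used(target, target_used);

	/* Count the qwords that differ in bits the HW actually reads */
	for (i = 0; i != qwords; i++) {
		if ((cur[i] ^ target[i]) & (cur_used[i] | target_used[i])) {
			ndiff++;
			diff_qword = i;
		}
	}

	if (ndiff == 0)
		return;

	if (ndiff == 1) {
		/* Single qword change: hitless atomic swap, one sync */
		WRITE_ONCE(cur[diff_qword], target[diff_qword]);
		writer->ops->sync(writer);
		return;
	}

	/* Otherwise pass through V=0 so HW never sees a torn entry */
	WRITE_ONCE(cur[0], cur[0] & ~writer->ops->v_bit);
	writer->ops->sync(writer);
	for (i = 1; i != qwords; i++)
		WRITE_ONCE(cur[i], target[i]);
	writer->ops->sync(writer);
	WRITE_ONCE(cur[0], target[0]);
	writer->ops->sync(writer);
}

So in this sketch the worst case is three syncs, and anything that
only has to touch one qword the HW reads is done hitlessly with a
single sync.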

> Keeping a single monolithic check-and-update function will be *so* much
> easier to understand and maintain. 

The ops are used by the kunit test suite and I think the kunit is
valuable.
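
The shape of those tests is basically the obvious one - the ops
indirection lets the sync be mocked out so the test can count it and
feed entries through the update algorithm (as in the
entry_writer_update() sketch above). A rough illustration only;
mock_ops, make_bypass_ste() and make_s1_ste() are hypothetical names,
not the actual test code:

struct mock_entry_writer {
	struct arm_smmu_entry_writer writer;
	unsigned int num_syncs;
};

static void mock_sync(struct arm_smmu_entry_writer *writer)
{
	container_of(writer, struct mock_entry_writer, writer)->num_syncs++;
}

static void test_bypass_to_s1(struct kunit *test)
{
	struct mock_entry_writer mock = { .writer.ops = &mock_ops };
	__le64 cur[4], target[4];

	make_bypass_ste(cur);
	make_s1_ste(target);

	entry_writer_update(&mock.writer, cur, target);

	/* The transition must not need more than the V=0 worst case */
	KUNIT_EXPECT_LE(test, mock.num_syncs, 3);
}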

Further, I've been looking at the AMD driver: it has the same
problem to solve for its DTE and can use this same solution. Intel
also has > 128 bit structures. I've already drafted an exploration of
using this algorithm in AMD.

I can see a future where we move this into shared core code. In that
case the driver only provides the used and sync operations, which I
think is a low driver burden for solving such a tricky shared
problem. There is some more shared complexity here on x86, which
needs to use 128 bit stores if the CPU supports those instructions.

IOW this approach is nice and valuable outside ARM. I would like to
move in a direction where we simply use this shared code for all
multi-qword HW descriptors. We've certainly invested enough in
building it and none of the three drivers have anything better.
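
In that shared world the per-driver surface would be nothing more
than the ops already shown above. Purely as a hypothetical sketch
(the generic struct name and the AMD bits here are invented for
illustration, the AMD DTE being 256 bits, i.e. 4 qwords):

static const struct iommu_entry_writer_ops amd_dte_writer_ops = {
	.num_entry_qwords = 4,
	.v_bit = cpu_to_le64(DTE_FLAG_V),	/* DTE bit 0 (Valid) */
	.get_used = amd_dte_get_used,	/* which bits the IOMMU reads */
	.sync = amd_dte_sync,		/* flush the device table entry */
};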

> As far as CDs go, anything we might reasonably want to change in a
> live CD is all in the first word so I don't see any value in

Changing from one S1 config to another requires updating two qwords
in the CD, and that requires the V=0 flow that the current
arm_smmu_write_ctx_desc() doesn't do. It is not that
arm_smmu_write_ctx_desc() needs to be prettier, it needs more
functionality.
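
To be concrete, the V=0 flow such a change needs is roughly the below
(a rough sketch, not the patch code; it leans on the existing
arm_smmu_sync_cd() helper and the driver's CTXDESC_* defines):

static void cd_replace_via_v0(struct arm_smmu_master *master, int ssid,
			      __le64 *cd, const __le64 *target)
{
	unsigned int i;

	/* 1. Invalidate so HW stops interpreting the old fields */
	WRITE_ONCE(cd[0], cd[0] & ~cpu_to_le64(CTXDESC_CD_0_V));
	arm_smmu_sync_cd(master, ssid, true);

	/* 2. Rewrite the body while the entry is invalid */
	for (i = 1; i != CTXDESC_CD_DWORDS; i++)
		WRITE_ONCE(cd[i], target[i]);
	arm_smmu_sync_cd(master, ssid, true);

	/* 3. Publish qword 0 last, with V=1 and the new fields */
	WRITE_ONCE(cd[0], target[0]);
	arm_smmu_sync_cd(master, ssid, true);
}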

> > > > +static void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
> > > >   {
> > > > +	unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0]));
> > > > +
> > > > +	used_bits[0] = cpu_to_le64(STRTAB_STE_0_V);
> > > > +	if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V)))
> > > > +		return;
> > > > +
> > > > +	/*
> > > > +	 * See 13.5 Summary of attribute/permission configuration fields for the
> > > > +	 * SHCFG behavior. It is only used for BYPASS, including S1DSS BYPASS,
> > > > +	 * and S2 only.
> > > > +	 */
> > > > +	if (cfg == STRTAB_STE_0_CFG_BYPASS ||
> > > > +	    cfg == STRTAB_STE_0_CFG_S2_TRANS ||
> > > > +	    (cfg == STRTAB_STE_0_CFG_S1_TRANS &&
> > > > +	     FIELD_GET(STRTAB_STE_1_S1DSS, le64_to_cpu(ent[1])) ==
> > > > +		     STRTAB_STE_1_S1DSS_BYPASS))
> > > > +		used_bits[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);
> > >
> > > Huh, SHCFG is really getting in the way here, isn't it?
> > 
> > I wouldn't say that.. It is just a complicated bit of the spec. One of
> > the things we recently did was to audit all the cache settings and, at
> > least, we then realized that SHCFG was being subtly used by S2 as
> > well..
> 
> Yeah, that really shouldn't be subtle; incoming attributes are replaced by
> S1 translation, thus they are relevant to not-S1 configs.

That is a really nice way to summarize the spec! But my remark was
more about the code, where it isn't so obvious what value it intended
SHCFG to have in the S2 case.

This doesn't really change anything about this patch; we'd still have
the above hunk to accurately reflect the SHCFG usage, and we'd still
set SHCFG to 0 in S1 cases where it isn't used by HW, just like today.

> I think it's likely to be significantly more straightforward to give up on
> the switch statement and jump straight into the more architectural paradigm
> at this level, e.g.

I've thought about that, and I can make the effort to do it; the
later nesting change would probably look nicer in this style.

Thanks,
Jason


