[PATCH v5 01/17] iommu/arm-smmu-v3: Make STE programming independent of the callers

Jason Gunthorpe jgg at nvidia.com
Thu Feb 15 08:01:35 PST 2024


On Thu, Feb 15, 2024 at 01:49:53PM +0000, Will Deacon wrote:
> Hi Jason,
> 
> On Tue, Feb 06, 2024 at 11:12:38AM -0400, Jason Gunthorpe wrote:
> > As the comment in arm_smmu_write_strtab_ent() explains, this routine has
> > been limited to only work correctly in certain scenarios that the caller
> > must ensure. Generally the caller must put the STE into ABORT or BYPASS
> > before attempting to program it to something else.
> 
> This is looking pretty good now, but I have a few comments inline.

Ok

> > @@ -48,6 +48,21 @@ enum arm_smmu_msi_index {
> >  	ARM_SMMU_MAX_MSIS,
> >  };
> >  
> > +struct arm_smmu_entry_writer_ops;
> > +struct arm_smmu_entry_writer {
> > +	const struct arm_smmu_entry_writer_ops *ops;
> > +	struct arm_smmu_master *master;
> > +};
> > +
> > +struct arm_smmu_entry_writer_ops {
> > +	unsigned int num_entry_qwords;
> > +	__le64 v_bit;
> > +	void (*get_used)(const __le64 *entry, __le64 *used);
> > +	void (*sync)(struct arm_smmu_entry_writer *writer);
> > +};
> 
> Can we avoid the indirection for now, please? I'm sure we'll want it later
> when you extend this to CDs, but for the initial support it just makes it
> more difficult to follow the flow. Should be a trivial thing to drop, I
> hope.

We can.
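
Roughly, dropping it means the STE write path calls the helpers
directly instead of going through the ops struct. Something like this
(purely illustrative - not the exact code I'll send):

static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
			       struct arm_smmu_ste *ste,
			       const struct arm_smmu_ste *target)
{
	__le64 target_used[ARRAY_SIZE(ste->data)];

	/* Which bits may the HW be looking at for the target config? */
	arm_smmu_get_ste_used(target->data, target_used);

	/*
	 * ...then the same used-bits update algorithm as in this patch,
	 * but issuing arm_smmu_sync_ste_for_sid(master->smmu, sid)
	 * directly after each qword update instead of going through
	 * writer->ops->sync().
	 */
}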

> > +static void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
> >  {
> > +	unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0]));
> > +
> > +	used_bits[0] = cpu_to_le64(STRTAB_STE_0_V);
> > +	if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V)))
> > +		return;
> > +
> > +	/*
> > +	 * See 13.5 Summary of attribute/permission configuration fields for the
> > +	 * SHCFG behavior. It is only used for BYPASS, including S1DSS BYPASS,
> > +	 * and S2 only.
> > +	 */
> > +	if (cfg == STRTAB_STE_0_CFG_BYPASS ||
> > +	    cfg == STRTAB_STE_0_CFG_S2_TRANS ||
> > +	    (cfg == STRTAB_STE_0_CFG_S1_TRANS &&
> > +	     FIELD_GET(STRTAB_STE_1_S1DSS, le64_to_cpu(ent[1])) ==
> > +		     STRTAB_STE_1_S1DSS_BYPASS))
> > +		used_bits[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);
> 
> Huh, SHCFG is really getting in the way here, isn't it? 

I wouldn't say that. It is just a complicated bit of the spec. One of
the things we recently did was audit all the cache settings, and in
doing so we realized that SHCFG is subtly used by S2 as
well.

Not sure if that was intentional or if the fact that S2 uses the
value too was just missed in the spec.

From that perspective I view this layout of the used bits as
valuable. It forces the kind of reflection and rigor that I think is
helpful. The fact that we found something to improve just by
inspection is proof of its worth to me.

> I think it also means we don't have a "hitless" transition from
> stage-2 translation -> bypass.

Hmm, I didn't notice that. The kunit passed:

[    0.511483] 1..1
[    0.511510]     KTAP version 1
[    0.511551]     # Subtest: arm-smmu-v3-kunit-test
[    0.511592]     # module: arm_smmu_v3_test
[    0.511594]     1..10
[    0.511910]     ok 1 arm_smmu_v3_write_ste_test_bypass_to_abort
[    0.512110]     ok 2 arm_smmu_v3_write_ste_test_abort_to_bypass
[    0.512386]     ok 3 arm_smmu_v3_write_ste_test_cdtable_to_abort
[    0.512631]     ok 4 arm_smmu_v3_write_ste_test_abort_to_cdtable
[    0.512874]     ok 5 arm_smmu_v3_write_ste_test_cdtable_to_bypass
[    0.513075]     ok 6 arm_smmu_v3_write_ste_test_bypass_to_cdtable
[    0.513275]     ok 7 arm_smmu_v3_write_ste_test_cdtable_s1dss_change
[    0.513466]     ok 8 arm_smmu_v3_write_ste_test_s1dssbypass_to_stebypass
[    0.513672]     ok 9 arm_smmu_v3_write_ste_test_stebypass_to_s1dssbypass
[    0.514148]     ok 10 arm_smmu_v3_write_ste_test_non_hitless

Which I see is because it did not test the S2 case...
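
Something along these lines in the kunit would have caught it (treat
it as a sketch - arm_smmu_test_make_s2_ste() is a hypothetical helper,
the test module currently only builds abort/bypass/cdtable STEs):

static void arm_smmu_v3_write_ste_test_s2_to_bypass(struct kunit *test)
{
	struct arm_smmu_ste ste;

	/* Hypothetical helper, mirroring arm_smmu_test_make_cdtable_ste() */
	arm_smmu_test_make_s2_ste(&ste, /* ats_enabled= */ true);
	/* The expected sync count here is a guess and would need checking */
	arm_smmu_v3_test_ste_expect_hitless_transition(test, &ste,
						       &bypass_ste, 2);
}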

> I'm inclined to leave it set to "use incoming" all the time; the
> only difference I can see is if you have stage-2 translation and a
> master emitting outer-shareable transactions, in which case they'd now
> be outer-shareable instead of inner-shareable, which I think is harmless.

Broadly it seems to me to make sense that the iommu would try to
provide a consistent translation - having bypass and S2 use different
cacheability doesn't seem great. But isn't the current S2 value of 0
"Non-shareable"?

> Additionally, it looks like there's an existing buglet here in that we
> shouldn't set SHCFG if SMMU_IDR1.ATTR_TYPES_OVR == 0.

Ah, because the spec says it is RES0 in that case. I'll add these two
to the pile of random stuff in part 3.
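
For the ATTR_TYPES_OVR one the shape would be roughly this (the
feature flag and IDR1 field names are placeholders for whatever part 3
ends up calling them):

	/* probe: remember whether the attribute override is implemented */
	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
	if (reg & IDR1_ATTR_TYPES_OVR)
		smmu->features |= ARM_SMMU_FEAT_ATTR_TYPES_OVR;

	/* STE construction: only program SHCFG when it is not RES0 */
	if (smmu->features & ARM_SMMU_FEAT_ATTR_TYPES_OVR)
		target->data[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG,
						STRTAB_STE_1_SHCFG_INCOMING));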

> > +	used_bits[0] |= cpu_to_le64(STRTAB_STE_0_CFG);
> > +	switch (cfg) {
> > +	case STRTAB_STE_0_CFG_ABORT:
> > +	case STRTAB_STE_0_CFG_BYPASS:
> > +		break;
> > +	case STRTAB_STE_0_CFG_S1_TRANS:
> > +		used_bits[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT |
> > +					    STRTAB_STE_0_S1CTXPTR_MASK |
> > +					    STRTAB_STE_0_S1CDMAX);
> > +		used_bits[1] |=
> > +			cpu_to_le64(STRTAB_STE_1_S1DSS | STRTAB_STE_1_S1CIR |
> > +				    STRTAB_STE_1_S1COR | STRTAB_STE_1_S1CSH |
> > +				    STRTAB_STE_1_S1STALLD | STRTAB_STE_1_STRW);
> > +		used_bits[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
> > +		used_bits[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID);
> > +		break;
> > +	case STRTAB_STE_0_CFG_S2_TRANS:
> > +		used_bits[1] |=
> > +			cpu_to_le64(STRTAB_STE_1_EATS);
> > +		used_bits[2] |=
> > +			cpu_to_le64(STRTAB_STE_2_S2VMID | STRTAB_STE_2_VTCR |
> > +				    STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2ENDI |
> > +				    STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2R);
> > +		used_bits[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK);
> > +		break;
> 
> With SHCFG fixed, can we go a step further with this and simply identify
> the live qwords directly, rather than on a field-by-field basis? I think
> we should be able to do the same "hitless" transitions you want with the
> coarser granularity.

Not naively - Michael's excellent unit test shows why. My
understanding of your idea was roughly this:

void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
{
	unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0]));

	used_bits[0] = U64_MAX;
	if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V)))
		return;

	/*
	 * See 13.5 Summary of attribute/permission configuration fields for the
	 * SHCFG behavior. It is only used for BYPASS, including S1DSS BYPASS,
	 * and S2 only.
	 */
	if (cfg == STRTAB_STE_0_CFG_BYPASS ||
	    cfg == STRTAB_STE_0_CFG_S2_TRANS ||
	    (cfg == STRTAB_STE_0_CFG_S1_TRANS &&
	     FIELD_GET(STRTAB_STE_1_S1DSS, le64_to_cpu(ent[1])) ==
		     STRTAB_STE_1_S1DSS_BYPASS))
		used_bits[1] |= U64_MAX;

	used_bits[0] |= U64_MAX;
	switch (cfg) {
	case STRTAB_STE_0_CFG_ABORT:
	case STRTAB_STE_0_CFG_BYPASS:
		break;
	case STRTAB_STE_0_CFG_S1_TRANS:
		used_bits[0] |= U64_MAX;
		used_bits[1] |= U64_MAX;
		used_bits[2] |= U64_MAX;
		break;
	case STRTAB_STE_0_CFG_NESTED:
		used_bits[0] |= U64_MAX;
		used_bits[1] |= U64_MAX;
		fallthrough;
	case STRTAB_STE_0_CFG_S2_TRANS:
		used_bits[1] |= U64_MAX;
		used_bits[2] |= U64_MAX;
		used_bits[3] |= U64_MAX;
		break;

	default:
		memset(used_bits, 0xFF, sizeof(struct arm_smmu_ste));
		WARN_ON(true);
	}
}

And the failures:

[    0.500676]     ok 1 arm_smmu_v3_write_ste_test_bypass_to_abort
[    0.500818]     ok 2 arm_smmu_v3_write_ste_test_abort_to_bypass
[    0.501014]     ok 3 arm_smmu_v3_write_ste_test_cdtable_to_abort
[    0.501197]     ok 4 arm_smmu_v3_write_ste_test_abort_to_cdtable
[    0.501340]     # arm_smmu_v3_write_ste_test_cdtable_to_bypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:128
[    0.501340]     Expected test_writer.invalid_entry_written == !hitless, but
[    0.501340]         test_writer.invalid_entry_written == 1 (0x1)
[    0.501340]         !hitless == 0 (0x0)
[    0.501489]     not ok 5 arm_smmu_v3_write_ste_test_cdtable_to_bypass
[    0.501787]     # arm_smmu_v3_write_ste_test_bypass_to_cdtable: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:128
[    0.501787]     Expected test_writer.invalid_entry_written == !hitless, but
[    0.501787]         test_writer.invalid_entry_written == 1 (0x1)
[    0.501787]         !hitless == 0 (0x0)
[    0.501931]     not ok 6 arm_smmu_v3_write_ste_test_bypass_to_cdtable
[    0.502274]     ok 7 arm_smmu_v3_write_ste_test_cdtable_s1dss_change
[    0.502397]     # arm_smmu_v3_write_ste_test_s1dssbypass_to_stebypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:128
[    0.502397]     Expected test_writer.invalid_entry_written == !hitless, but
[    0.502397]         test_writer.invalid_entry_written == 1 (0x1)
[    0.502397]         !hitless == 0 (0x0)
[    0.502473]     # arm_smmu_v3_write_ste_test_s1dssbypass_to_stebypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:129
[    0.502473]     Expected test_writer.num_syncs == num_syncs_expected, but
[    0.502473]         test_writer.num_syncs == 3 (0x3)
[    0.502473]         num_syncs_expected == 2 (0x2)
[    0.502784]     not ok 8 arm_smmu_v3_write_ste_test_s1dssbypass_to_stebypass
[    0.503073]     # arm_smmu_v3_write_ste_test_stebypass_to_s1dssbypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:128
[    0.503073]     Expected test_writer.invalid_entry_written == !hitless, but
[    0.503073]         test_writer.invalid_entry_written == 1 (0x1)
[    0.503073]         !hitless == 0 (0x0)
[    0.503176]     # arm_smmu_v3_write_ste_test_stebypass_to_s1dssbypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:129
[    0.503176]     Expected test_writer.num_syncs == num_syncs_expected, but
[    0.503176]         test_writer.num_syncs == 3 (0x3)
[    0.503176]         num_syncs_expected == 2 (0x2)
[    0.503464]     not ok 9 arm_smmu_v3_write_ste_test_stebypass_to_s1dssbypass
[    0.503807]     ok 10 arm_smmu_v3_write_ste_test_non_hitless

BYPASS -> S1 requires changing overlapping bits in qword 1. The
programming sequence would look like this:

start qw[1] = SHCFG_INCOMING
      qw[1] = SHCFG_INCOMING | S1DSS
      qw[0] = S1 mode
      qw[1] = S1DSS

The two states share qw[1], and BYPASS ignores all of it except
SHCFG_INCOMING. Since at qword granularity bypass would have all of
qw[1] marked as used due to the SHCFG, there is no way to express that
it is not looking at the other bits.
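
Concretely, at qword granularity the used masks for the two states
collapse to roughly:

	BYPASS: qw[0] = ~0 (V, CFG), qw[1] = ~0 (because of SHCFG)
	S1:     qw[0] = ~0,          qw[1] = ~0 (S1DSS, S1CIR, ...)

Both states use both qwords and both qwords have to change, so
whichever qword is written first looks like it disrupts bits the
installed configuration is using, and the algorithm falls back to the
non-hitless V=0 sequence.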

We'd have to start doing really hacky things like removing SHCFG as a
used field entirely - but I think if you do that you break the entire
logic of the design and also go backwards to programming that only
works if STEs are constructed in certain ways.

Thanks,
Jason


