[PATCH 04/19] iommu/arm-smmu-v3: Make STE programming independent of the callers

Jason Gunthorpe jgg at nvidia.com
Wed Oct 18 05:24:35 PDT 2023


On Wed, Oct 18, 2023 at 06:54:10PM +0800, Michael Shavit wrote:
> > +       } else if (!step_change) {
> > +               /* cur == target, so all done */
> > +               if (memcmp(cur, target, sizeof(*cur)) == 0)
> > +                       return true;
> Shouldn't this be len * sizeof(*cur)?

Ugh, yes, thank you. An earlier version had cur as a 'struct
arm_smmu_ste'; I missed this when I changed it to allow reuse for the
CD path...

> > +       case STRTAB_STE_0_CFG_S1_TRANS:
> > +               used_bits->data[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT |
> > +                                                 STRTAB_STE_0_S1CTXPTR_MASK |
> > +                                                 STRTAB_STE_0_S1CDMAX);
> > +               used_bits->data[1] |=
> > +                       cpu_to_le64(STRTAB_STE_1_S1DSS | STRTAB_STE_1_S1CIR |
> > +                                   STRTAB_STE_1_S1COR | STRTAB_STE_1_S1CSH |
> > +                                   STRTAB_STE_1_S1STALLD | STRTAB_STE_1_STRW);
> > +               used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
> > +
> > +               if (FIELD_GET(STRTAB_STE_1_S1DSS, le64_to_cpu(ent->data[1])) ==
> > +                   STRTAB_STE_1_S1DSS_BYPASS)
> > +                       used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);
> 
> Although the driver only explicitly sets SHCFG for bypass streams, my
> reading of the spec is it is also accessed for S1 and S2 STEs:
> "The SMMU might convey attributes input from a device through this
> process, so that the device might influence the final transaction
> access, and input attributes might be overridden on a per-device basis
> using the MTCFG/MemAttr, SHCFG, ALLOCCFG STE fields. The input
> attribute, modified by these fields, is primarily useful for setting
> the resulting output access attribute when both stage 1 and stage 2
> translation is bypassed (no translation table descriptors to determine
> attribute) but can also be useful for stage 2-only configurations in
> which a device stream might have finer knowledge about the required
> access behavior than the general virtual machine-global stage 2
> translation tables."

Hm.. I struggled with this for a while.

There is some kind of issue here: we cannot have it both ways, where
the S1 translation on a PASID needs SHCFG=0 and the S1DSS_BYPASS needs
SHCFG=1. Either the S1 PASID ignores the field, e.g. because the IOPTE
supersedes it (which is what this patch assumes), or the S1DSS doesn't
need it, or we cannot use S1DSS at all.

Let me see if we can get a deeper understanding here; it is a good
point.

> > +       case STRTAB_STE_0_CFG_S2_TRANS:
> > +               used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
> > +               used_bits->data[2] |=
> > +                       cpu_to_le64(STRTAB_STE_2_S2VMID | STRTAB_STE_2_VTCR |
> > +                                   STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2ENDI |
> > +                                   STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2R);
> > +               used_bits->data[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK);
> > +               break;
> > +
> > +       default:
> > +               memset(used_bits, 0xFF, sizeof(*used_bits));
> 
> Can we consider a WARN here since this driver only ever uses one of
> the above 4 values and we probably have a programming error if we see
> something else.

Ok

> > +static bool arm_smmu_write_ste_step(struct arm_smmu_ste *cur,
> > +                                   const struct arm_smmu_ste *target,
> > +                                   const struct arm_smmu_ste *target_used)
> > +{
> > +       struct arm_smmu_ste cur_used;
> > +       struct arm_smmu_ste step;
> > +
> > +       arm_smmu_get_ste_used(cur, &cur_used);
> > +       return arm_smmu_write_entry_step(cur->data, cur_used.data, target->data,
> > +                                        target_used->data, step.data,
> 
> What's up with requiring callers to allocate and provide step.data if
> it's not used by any of the arm_smmu_write_entry_step callers?

arm_smmu_write_entry_step() requires a temporary buffer of len qwords.
Since variable-length stack arrays (ie alloca) are forbidden in the
kernel, and kmalloc would be silly, the simplest solution was to have
the caller allocate it and then pass it in.

Alternatively we could have a max-size temporary array inside
arm_smmu_write_entry_step() with some static asserts, but I thought
that was less clear.

> > +                                        cpu_to_le64(STRTAB_STE_0_V),
> This also looks a bit strange at this stage since CD entries aren't
> yet supported..... but sure.

Yeah, this function shim is for the later patch that adds one of these
for CD. Don't want to go and change stuff twice.

For reference the CD function from a later patch is:

static bool arm_smmu_write_cd_step(struct arm_smmu_cd *cur,
				   const struct arm_smmu_cd *target,
				   const struct arm_smmu_cd *target_used)
{
	struct arm_smmu_cd cur_used;
	struct arm_smmu_cd step;

	arm_smmu_get_cd_used(cur, &cur_used);
	return arm_smmu_write_entry_step(cur->data, cur_used.data, target->data,
					 target_used->data, step.data,
					 cpu_to_le64(CTXDESC_CD_0_V),
					 ARRAY_SIZE(cur->data));
}

Thanks,
Jason


