[PATCH v3 04/19] iommu/arm-smmu-v3: Make STE programming independent of the callers
Mostafa Saleh
smostafa at google.com
Mon Jan 29 12:48:45 PST 2024
On Mon, Jan 29, 2024 at 03:49:10PM -0400, Jason Gunthorpe wrote:
> On Mon, Jan 29, 2024 at 07:10:47PM +0000, Mostafa Saleh wrote:
>
> > > Going forward this will use a V=0 transition instead of cycling through
> > > ABORT if a hitful change is required. This seems more appropriate as ABORT
> > > will fail DMAs without any logging, but dropping a DMA due to transient
> > > V=0 is probably signaling a bug, so the C_BAD_STE is valuable.
> > Would the driver do anything in that case, or would it just print the log message?
>
> Just log, AFAIK.
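For anyone following along, the hitful path through V=0 would look roughly
like this (a minimal sketch of the idea, not the patch's exact code;
arm_smmu_sync_ste_for_sid() is the existing CFGI_STE + CMD_SYNC helper):

	/* Invalidate first: a DMA racing the update now reports C_BAD_STE
	 * (which gets logged) instead of being silently terminated as it
	 * would be under CFG=ABORT. */
	WRITE_ONCE(ste->data[0], 0);			/* V=0 */
	arm_smmu_sync_ste_for_sid(smmu, sid);

	/* Rewrite the remaining qwords while the STE is invalid... */
	WRITE_ONCE(ste->data[1], target->data[1]);
	WRITE_ONCE(ste->data[2], target->data[2]);

	/* ...then publish qword 0 (with V=1) last and sync again. */
	WRITE_ONCE(ste->data[0], target->data[0]);
	arm_smmu_sync_ste_for_sid(smmu, sid);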
>
> > > +static bool arm_smmu_write_entry_step(__le64 *cur, const __le64 *cur_used,
> > > + const __le64 *target,
> > > + const __le64 *target_used, __le64 *step,
> > > + __le64 v_bit,
> > I think this is confusing here. I believe we have this as an argument
> > because this function will be used for the CD later; however, for this
> > series it is unnecessary. IMHO, it should be removed and added back in
> > another patch for the CD rework.
>
> It is a lot of code churn to do that, even more so in the new version.
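FWIW, the dual use would look something like the below (a sketch; the
remaining arguments are elided since they aren't visible in the quoted
hunk). Both valid bits live in qword 0, just at different positions:

	/* STE path: V is bit 0 of qword 0 */
	arm_smmu_write_entry_step(..., cpu_to_le64(STRTAB_STE_0_V), ...);

	/* CD path (the later rework): V is bit 31 of qword 0 */
	arm_smmu_write_entry_step(..., cpu_to_le64(CTXDESC_CD_0_V), ...);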
>
> > > + used_bits->data[0] |= cpu_to_le64(STRTAB_STE_0_CFG);
> > > + switch (FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent->data[0]))) {
> > > + case STRTAB_STE_0_CFG_ABORT:
> > > + break;
> > > + case STRTAB_STE_0_CFG_BYPASS:
> > > + used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);
> > > + break;
> > > + case STRTAB_STE_0_CFG_S1_TRANS:
> > > + used_bits->data[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT |
> > > + STRTAB_STE_0_S1CTXPTR_MASK |
> > > + STRTAB_STE_0_S1CDMAX);
> > > + used_bits->data[1] |=
> > > + cpu_to_le64(STRTAB_STE_1_S1DSS | STRTAB_STE_1_S1CIR |
> > > + STRTAB_STE_1_S1COR | STRTAB_STE_1_S1CSH |
> > > + STRTAB_STE_1_S1STALLD | STRTAB_STE_1_STRW);
> > > + used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
> > > + break;
> > AFAIU, this is missing something like (while passing smmu->features)
> > used_bits->data[2] |= features & ARM_SMMU_FEAT_NESTING ?
> > cpu_to_le64(STRTAB_STE_2_S2VMID) : 0;
> >
> > As the SMMUv3 manual says:
> > “ For a Non-secure STE when stage 2 is implemented (SMMU_IDR0.S2P == 1)
> > translations resulting from a StreamWorld == NS-EL1 configuration are
> > VMID-tagged with S2VMID when either of stage 1 (Config[0] == 1) or stage 2
> > (Config[1] == 1) provide translation.“
> >
> > Which means that in the case of an S1=>S2 switch (or vice versa) this
> > algorithm will ignore the VMID even though it is in use.
Yes, in that case we would consider S2VMID even for stage-1-only instances;
even though it should never change there, the algorithm would take the same
steps. I guess it might still look confusing, but no strong opinion.
>
> Ah, yes, that is a small miss, thanks. I don't think we need the
> features test though; S2VMID doesn't mean something different if the
> feature is not present.
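So the S1 case in arm_smmu_get_ste_used() would just grow an unconditional
line, something like (a sketch of what is being agreed on here, not the
literal v4 hunk):

	case STRTAB_STE_0_CFG_S1_TRANS:
		...
		/* NS-EL1 stage-1 translations are VMID-tagged whenever
		 * stage 2 is implemented, so always treat S2VMID as used;
		 * this is harmless when the feature is absent. */
		used_bits->data[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID);
		break;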
>
> > > +static void arm_smmu_write_ste(struct arm_smmu_device *smmu, u32 sid,
> > > + struct arm_smmu_ste *ste,
> > > + const struct arm_smmu_ste *target)
> > > +{
> > > + struct arm_smmu_ste target_used;
> > > + int i;
> > > +
> > > + arm_smmu_get_ste_used(target, &target_used);
> > > + /* Masks in arm_smmu_get_ste_used() are up to date */
> > > + for (i = 0; i != ARRAY_SIZE(target->data); i++)
> > > + WARN_ON_ONCE(target->data[i] & ~target_used.data[i]);
> > In what situation would this be triggered? Is it for future proofing?
> > Maybe we can move it to arm_smmu_get_ste_used()?
>
> Yes, to prevent people from making an error down the road.
>
> It can't be in arm_smmu_get_ste_used() due to how this specific
> algorithm works iteratively.
>
> And in the v4 version it still wouldn't be a good idea at this point
> due to how the series slowly migrates STE and CD programming
> over. There are cases where the current STE will not have been written
> by this code and may not pass this test.
>
> Thanks,
> Jason
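To make the WARN's failure mode concrete, here is a hypothetical bug it
would catch (an illustration only, not code from the series): a bypass STE
that wrongly programs a stage-1 field. arm_smmu_get_ste_used() only reports
SHCFG as used for bypass, so the stray bit survives the mask and trips the
WARN:

	struct arm_smmu_ste bad = {};

	bad.data[0] = cpu_to_le64(STRTAB_STE_0_V |
				  FIELD_PREP(STRTAB_STE_0_CFG,
					     STRTAB_STE_0_CFG_BYPASS));
	/* S1STALLD is not a used bit for bypass, so
	 * bad.data[1] & ~target_used.data[1] != 0 and the WARN fires,
	 * flagging stale used-bits accounting. */
	bad.data[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);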