[PATCH v7 7/9] iommu/arm-smmu-v3: Move the CD generation for SVA into a function

Jason Gunthorpe jgg at nvidia.com
Thu Apr 18 07:28:27 PDT 2024


On Thu, Apr 18, 2024 at 12:40:03PM +0800, Michael Shavit wrote:

> > +static void arm_smmu_make_sva_cd(struct arm_smmu_cd *target,
> > +                                struct arm_smmu_master *master,
> > +                                struct mm_struct *mm, u16 asid)
> > +{
> > +       u64 par;
> > +
> > +       memset(target, 0, sizeof(*target));
> > +
> > +       par = cpuid_feature_extract_unsigned_field(
> > +               read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1),
> > +               ID_AA64MMFR0_EL1_PARANGE_SHIFT);
> > +
> > +       target->data[0] = cpu_to_le64(
> > +               CTXDESC_CD_0_TCR_EPD1 |
> > +#ifdef __BIG_ENDIAN
> > +               CTXDESC_CD_0_ENDI |
> > +#endif
> > +               CTXDESC_CD_0_V |
> > +               FIELD_PREP(CTXDESC_CD_0_TCR_IPS, par) |
> > +               CTXDESC_CD_0_AA64 |
> > +               (master->stall_enabled ? CTXDESC_CD_0_S : 0) |
> > +               CTXDESC_CD_0_R |
> > +               CTXDESC_CD_0_A |
> > +               CTXDESC_CD_0_ASET |
> > +               FIELD_PREP(CTXDESC_CD_0_ASID, asid));
> > +
> > +       /*
> > +        * If no MM is passed then this creates a SVA entry that faults
> > +        * everything. arm_smmu_write_cd_entry() can hitlessly go between these
> > +        * two entries types since TTB0 is ignored by HW when EPD0 is set.
> > +        */
> > +       if (mm) {
> > +               target->data[0] |= cpu_to_le64(
> > +                       FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ,
> > +                                  64ULL - vabits_actual) |
> > +                       FIELD_PREP(CTXDESC_CD_0_TCR_TG0, page_size_to_cd()) |
> > +                       FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0,
> > +                                  ARM_LPAE_TCR_RGN_WBWA) |
> > +                       FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0,
> > +                                  ARM_LPAE_TCR_RGN_WBWA) |
> > +                       FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS));
> > +
> > +               target->data[1] = cpu_to_le64(virt_to_phys(mm->pgd) &
> > +                                             CTXDESC_CD_1_TTB0_MASK);
> > +       } else {
> > +               target->data[0] |= cpu_to_le64(CTXDESC_CD_0_TCR_EPD0);
> > +
> > +               /*
> > +                * Disable stall and immediately generate an abort if stall
> > +                * disable is permitted. This speeds up cleanup for an unclean
> > +                * exit if the device is still doing a lot of DMA.
> > +                */
> > +               if (master->stall_enabled &&
> > +                   !(master->smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
> > +                       target->data[0] &=
> > +                               cpu_to_le64(~(CTXDESC_CD_0_S | CTXDESC_CD_0_R));
> 
> 
> This condition looks slightly different from the original one. Does
> this imply a change in behaviour that should be noted in the commit
> message?

You mean because stall_enabled is checked? This means the R bit will
no longer be cleared for non-stalling devices.

Yeah, that probably shouldn't be changed in this patch; I'll adjust it.
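
For reference, a behaviour-preserving version would check only
STALL_FORCE, as the old quiet_cd path did (untested sketch):

	/*
	 * Disable stall and immediately generate an abort if stall
	 * disable is permitted. This speeds up cleanup for an unclean
	 * exit if the device is still doing a lot of DMA.
	 */
	if (!(master->smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
		target->data[0] &=
			cpu_to_le64(~(CTXDESC_CD_0_S | CTXDESC_CD_0_R));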

But I think the original commit is slightly off, as the PCI modes
shouldn't change behavior: issuing a non-translated MemRd/Wr to a
non-present IOVA should always abort and always log an event,
regardless of what state the mm is in. Devices need to ensure that
their HW only issues ATS for SVA PASIDs.

Thanks,
Jason


