[PATCH] arm64: contpte: fix set_access_flags() no-op check for SMMU/ATS faults
Catalin Marinas
catalin.marinas at arm.com
Thu Mar 5 09:33:25 PST 2026
Looking at the patch again, some more comments.
On Mon, Mar 02, 2026 at 10:37:51PM -0800, Piotr Jaroszynski wrote:
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index bcac4f55f9c1..9868bfe4607c 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -390,6 +390,23 @@ void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
> }
> EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
>
> +static bool contpte_all_subptes_match_access_flags(pte_t *ptep, pte_t entry)
More of a nitpick: since this checks both the access flags and the write
permission, I'd rename it to something else, maybe contpte_ptep_same(),
to somewhat resemble pte_same() as used by __ptep_set_access_flags().
> +{
> + pte_t *cont_ptep = contpte_align_down(ptep);
> + const pteval_t access_mask = PTE_RDONLY | PTE_AF | PTE_WRITE | PTE_DIRTY;
We can drop PTE_DIRTY from the mask, as it's not relevant to the
hardware permissions. It probably doesn't matter in practice, though.
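To illustrate, here is a userspace sketch of the per-sub-PTE check with
PTE_DIRTY dropped (the bit positions match arm64 and CONT_PTES assumes
4K pages, but the helper name and standalone form are made up for this
example, not kernel code):

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_RDONLY (1ULL << 7)
#define PTE_AF     (1ULL << 10)
#define PTE_WRITE  (1ULL << 51)  /* DBM bit, reused as the software write bit */
#define CONT_PTES  16            /* sub-PTEs per contiguous block with 4K pages */

/* Return true if every sub-PTE already carries the requested
 * hardware-relevant permission bits; PTE_DIRTY is deliberately
 * not part of the mask. */
static bool ptes_match_hw_perms(const uint64_t *ptes, uint64_t entry)
{
	const uint64_t mask = PTE_RDONLY | PTE_AF | PTE_WRITE;
	int i;

	for (i = 0; i < CONT_PTES; i++)
		if ((ptes[i] & mask) != (entry & mask))
			return false;
	return true;
}
```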
> + pteval_t entry_access = pte_val(entry) & access_mask;
> + int i;
> +
> + for (i = 0; i < CONT_PTES; i++) {
> + pteval_t pte_access = pte_val(__ptep_get(cont_ptep + i)) & access_mask;
> +
> + if (pte_access != entry_access)
> + return false;
> + }
> +
> + return true;
> +}
> +
> int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> unsigned long addr, pte_t *ptep,
> pte_t entry, int dirty)
> @@ -399,13 +416,35 @@ int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> int i;
>
> /*
> - * Gather the access/dirty bits for the contiguous range. If nothing has
> - * changed, its a noop.
> + * Check whether all sub-PTEs in the CONT block already have the
> + * requested access flags, using raw per-PTE values rather than the
> + * gathered ptep_get() view.
It's not just the access flag: AF, dirty and the write permission can
all be changed by this function (and only to a more permissive
setting).
> + *
> + * ptep_get() gathers AF/dirty state across the whole CONT block,
> + * which is correct for CPU TLB semantics: with FEAT_HAFDBS the
> + * hardware may set AF/dirty on any sub-PTE and the CPU TLB treats
> + * the gathered result as authoritative for the entire range. But an
> + * SMMU without HTTU (or with HA/HD disabled in CD.TCR) evaluates
The same applies to the CPU: we don't force all CPUs in a system to
support DBM.
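The false no-op is easy to model in userspace: if the gathered
(ptep_get()-style) view ORs AF across the block, one sibling with AF
set makes the old pte_same()-style check see "nothing to do" even
though the faulting sub-PTE still lacks AF. The helper name and the
AF-only gathering here are illustrative, not the kernel code:

```c
#include <stdint.h>

#define PTE_AF    (1ULL << 10)
#define CONT_PTES 16

/* Model of the gathered view: take sub-PTE 0 and OR in the AF bit
 * from every sibling, roughly what ptep_get() does for a contpte
 * block (dirty gathering omitted for brevity). */
static uint64_t gather_af(const uint64_t *ptes)
{
	uint64_t pte = ptes[0];
	int i;

	for (i = 1; i < CONT_PTES; i++)
		pte |= ptes[i] & PTE_AF;
	return pte;
}
```

With entry having AF set and only a sibling (not the target) marked
accessed, gather_af() equals entry, so the old check bails out while a
walker that evaluates descriptor 0 individually keeps faulting.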
> + * each descriptor individually and will keep faulting on the target
> + * sub-PTE if its flags haven't actually been updated. Gathering can
> + * therefore cause false no-ops when only a sibling has been updated:
> + * - write faults: target still has PTE_RDONLY (needs PTE_RDONLY cleared)
> + * - read faults: target still lacks PTE_AF
> + *
> + * Per Arm ARM (DDI 0487) D8.7.1, any sub-PTE in a CONT range may
> + * become the effective cached translation, so all entries must have
> + * consistent attributes. Check the full CONT block before returning
> + * no-op, and when any sub-PTE mismatches, proceed to update the whole
> + * range.
> */
> - orig_pte = pte_mknoncont(ptep_get(ptep));
> - if (pte_val(orig_pte) == pte_val(entry))
> + if (contpte_all_subptes_match_access_flags(ptep, entry))
> return 0;
>
> + /*
> + * Use raw target pte (not gathered) for write-bit unfold decision.
> + */
> + orig_pte = pte_mknoncont(__ptep_get(ptep));
This is fine, since all sub-PTEs should have the same PTE_WRITE bit.
Anyway, nothing major, so:
Reviewed-by: Catalin Marinas <catalin.marinas at arm.com>