[PATCH v3 02/15] mm: Batch-clear PTE ranges during zap_pte_range()

Alistair Popple apopple at nvidia.com
Thu Dec 7 17:30:40 PST 2023


Ryan Roberts <ryan.roberts at arm.com> writes:

> Convert zap_pte_range() to clear a set of ptes in a batch. A given batch
> maps a physically contiguous block of memory, all belonging to the same
> folio. This will likely improve performance by a tiny amount due to
> removing duplicate calls to mark the folio dirty and accessed. It also
> provides us with a future opportunity to batch the rmap removal.
>
> However, the primary motivation for this change is to reduce the number
> of tlb maintenance operations that the arm64 backend has to perform
> during exit and other syscalls that cause zap_pte_range() (e.g. munmap,
> madvise(DONTNEED), etc.), as it is about to add transparent support for
> the "contiguous bit" in its ptes. By clearing ptes using the new
> clear_ptes() API, the backend doesn't have to perform an expensive
> unfold operation when a PTE being cleared is part of a contpte block.
> Instead it can just clear the whole block immediately.
>
> This change addresses the core-mm refactoring only, and introduces
> clear_ptes() with a default implementation that calls
> ptep_get_and_clear_full() for each pte in the range. Note that this API
> returns the pte at the beginning of the batch, but with the dirty and
> young bits set if ANY of the ptes in the cleared batch had those bits
> set; this information is applied to the folio by the core-mm. Given the
> batch is garranteed to cover only a single folio, collapsing this state

Nit: s/garranteed/guaranteed/

> does not lose any useful information.
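
For anyone reading along without the rest of the series: based on the
description above, the generic fallback is presumably something like
the sketch below. This is my paraphrase rather than the snipped
include/linux/pgtable.h hunk, so the actual signature may differ.

/*
 * Sketch of the described default: clear @nr ptes (all mapping the
 * same folio) and return the first pte, with the dirty/young bits of
 * the whole batch folded into it.
 */
static inline pte_t clear_ptes(struct mm_struct *mm, unsigned long addr,
			       pte_t *ptep, int full, unsigned int nr)
{
	pte_t pte = ptep_get_and_clear_full(mm, addr, ptep, full);
	unsigned int i;

	for (i = 1; i < nr; i++) {
		pte_t tmp;

		addr += PAGE_SIZE;
		ptep++;
		tmp = ptep_get_and_clear_full(mm, addr, ptep, full);

		/* Collapsing this state is lossless: one folio. */
		if (pte_dirty(tmp))
			pte = pte_mkdirty(pte);
		if (pte_young(tmp))
			pte = pte_mkyoung(pte);
	}

	return pte;
}
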
>
> A separate change will implement clear_ptes() in the arm64 backend to
> realize the performance improvement as part of the work to enable
> contpte mappings.
>
> Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
> ---
>  include/asm-generic/tlb.h |  9 ++++++
>  include/linux/pgtable.h   | 26 ++++++++++++++++
>  mm/memory.c               | 63 ++++++++++++++++++++++++++-------------
>  mm/mmu_gather.c           | 14 +++++++++
>  4 files changed, 92 insertions(+), 20 deletions(-)

<snip>

> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 4f559f4ddd21..57b4d5f0dfa4 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -47,6 +47,20 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
>  	return true;
>  }
>  
> +unsigned int tlb_get_guaranteed_space(struct mmu_gather *tlb)
> +{
> +	struct mmu_gather_batch *batch = tlb->active;
> +	unsigned int nr_next = 0;
> +
> +	/* Allocate next batch so we can guarantee at least one batch. */
> +	if (tlb_next_batch(tlb)) {
> +		tlb->active = batch;

Rather than calling tlb_next_batch(tlb) and then undoing some of what it
does, I think it would be clearer to factor out the allocation part of
tlb_next_batch() into a separate function (e.g. tlb_alloc_batch()) that
you can call from both here and tlb_next_batch(), roughly as sketched
below.
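
Something like the below is what I have in mind (untested sketch only;
the field names are lifted from my reading of the current
tlb_next_batch(), so double-check them):

/*
 * Allocate a new batch and link it after the current one, but do not
 * advance tlb->active; the caller decides whether to switch to it.
 */
static struct mmu_gather_batch *tlb_alloc_batch(struct mmu_gather *tlb)
{
	struct mmu_gather_batch *batch;

	if (tlb->active->next)
		return tlb->active->next;

	if (tlb->batch_count == MAX_GATHER_BATCH_COUNT)
		return NULL;

	batch = (void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
	if (!batch)
		return NULL;

	tlb->batch_count++;
	batch->next = NULL;
	batch->nr = 0;
	batch->max = MAX_GATHER_BATCH;

	tlb->active->next = batch;
	return batch;
}

tlb_next_batch() then just advances tlb->active to whatever
tlb_alloc_batch() returns, and tlb_get_guaranteed_space() can call
tlb_alloc_batch() directly without ever touching tlb->active.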

Otherwise I think this overall direction looks better than trying to
play funny games in the arch layer, as it's much clearer what's going
on in the core-mm code.

 - Alistair

> +		nr_next = batch->next->max;
> +	}
> +
> +	return batch->max - batch->nr + nr_next;
> +}
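
For completeness: I take it the snipped memory.c hunk uses this to
bound the batch so that every cleared pte can still be queued in the
gather without a mid-batch flush; my paraphrase of the expected
caller, not the actual hunk:

	/* Never clear more ptes than the gather can take. */
	unsigned int max_nr = tlb_get_guaranteed_space(tlb);
	unsigned int nr = min_t(unsigned int, folio_nr_pages(folio),
				max_nr);

	ptent = clear_ptes(mm, addr, pte, tlb->fullmm, nr);
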
> +
>  #ifdef CONFIG_SMP
>  static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_struct *vma)
>  {



