[PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()

David Hildenbrand david at redhat.com
Tue Dec 19 03:29:23 PST 2023


On 19.12.23 09:30, Ryan Roberts wrote:
> On 18/12/2023 17:47, David Hildenbrand wrote:
>> On 18.12.23 11:50, Ryan Roberts wrote:
>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>> batch is determined by the architecture with the new helper,
>>> pte_batch_remaining(), and maps a physically contiguous block of memory
>>> whose pages all belong to the same folio. A pte batch is then
>>> write-protected in one go in the parent using the new helper,
>>> ptep_set_wrprotects(), and is set in one go in the child using the new
>>> helper, set_ptes_full().
>>>
>>> The primary motivation for this change is to reduce the number of tlb
>>> maintenance operations that the arm64 backend has to perform during
>>> fork, as it is about to add transparent support for the "contiguous bit"
>>> in its ptes. By write-protecting the parent using the new
>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>> when all ptes in the range are being write-protected. Similarly, by
>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>> child, the backend does not need to fold a contiguous range once all of
>>> its ptes are populated - they can be populated as a contiguous range in
>>> the first place.
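
For orientation, the shape of the batched loop is roughly the sketch
below. Apart from pte_batch_remaining() (whose generic default appears in
the hunk further down), the helper signatures here are guesses for
illustration, not copies from the patch:

/*
 * Rough sketch only: apart from pte_batch_remaining(), the helper
 * signatures below are assumptions, not taken from the patch.
 */
static void copy_pte_batches_sketch(struct mm_struct *dst_mm,
		struct mm_struct *src_mm, pte_t *dst_pte, pte_t *src_pte,
		unsigned long addr, unsigned long end)
{
	while (addr < end) {
		/* One ptep_get() per batch, not per pte. */
		pte_t pte = ptep_get(src_pte);
		unsigned int nr = pte_batch_remaining(pte, addr, end);

		/* Write-protect the whole batch in the parent in one go. */
		ptep_set_wrprotects(src_mm, addr, src_pte, nr);

		/* (folio refcount/mapcount handling elided) */

		/* Populate the whole batch in the child in one go (CoW case). */
		set_ptes_full(dst_mm, addr, dst_pte, pte_wrprotect(pte), nr);

		addr += nr * PAGE_SIZE;
		src_pte += nr;
		dst_pte += nr;
	}
}

The point being that ptep_get(), the parent wrprotect and the child's
set_ptes each run once per batch rather than once per pte.
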
>>>
>>> This code is very performance sensitive, and a significant amount of
>>> effort has been put into not regressing performance for the order-0
>>> folio case. By default, pte_batch_remaining() is a compile-time constant
>>> of 1, which enables the compiler to simplify the extra loops that are
>>> added for batching and produce code that is equivalent to (and as
>>> performant as) the previous implementation.
>>>
>>> This change addresses the core-mm refactoring only and a separate change
>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>> set_ptes_full() in the arm64 backend to realize the performance
>>> improvement as part of the work to enable contpte mappings.
>>>
>>> To ensure the arm64 backend is performant once implemented, this change
>>> is careful to only call ptep_get() once per pte batch.
>>>
>>> The following microbenchmark results demonstrate that there is no
>>> significant performance change after this patch. Fork is called in a
>>> tight loop in a process with 1G of populated memory and the time for the
>>> function to execute is measured. 100 iterations per run, 8 runs
>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>> were performed for the case where the 1G of memory is composed of
>>> order-0 folios and for the case where it is composed of pte-mapped
>>> order-9 folios. Negative is faster, positive is slower, compared to the
>>> baseline upon which the series is based:
>>>
>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>> | fork          |-------------------|-------------------|
>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>> |---------------|---------|---------|---------|---------|
>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>
>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>> | fork          |-------------------|-------------------|
>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>> |---------------|---------|---------|---------|---------|
>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>
>>> Tested-by: John Hubbard <jhubbard at nvidia.com>
>>> Reviewed-by: Alistair Popple <apopple at nvidia.com>
>>> Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
>>> ---
>>>    include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>    mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>>>    2 files changed, 139 insertions(+), 33 deletions(-)
>>>
>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>> index af7639c3b0a3..db93fb81465a 100644
>>> --- a/include/linux/pgtable.h
>>> +++ b/include/linux/pgtable.h
>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>    #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>    #endif
>>>    +#ifndef pte_batch_remaining
>>> +/**
>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>> + * @pte: Page table entry for the first page.
>>> + * @addr: Address of the first page.
>>> + * @end: Batch ceiling (e.g. end of vma).
>>> + *
>>> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
>>> + * In such cases, this function returns the remaining number of pages to the end
>>> + * of the current batch, as defined by addr. This can be useful when iterating
>>> + * over ptes.
>>> + *
>>> + * May be overridden by the architecture, else batch size is always 1.
>>> + */
>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>> +                        unsigned long end)
>>> +{
>>> +    return 1;
>>> +}
>>> +#endif
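
Purely as an illustration of how an architecture could override this
(this is not the arm64 code from the series), a hypothetical
implementation for a 16-entry contiguous-mapping unit might look as
follows; CONT_PTES and pte_is_contig() are made-up names:

/*
 * Hypothetical override for an architecture whose hardware can mark a
 * naturally aligned block of 16 ptes as one contiguous mapping.
 * CONT_PTES and pte_is_contig() are illustrative names only.
 */
#define CONT_PTES	16

static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
					       unsigned long end)
{
	unsigned long next;

	if (!pte_is_contig(pte))
		return 1;

	/* Pages left until the next contiguous-block boundary (or end). */
	next = ALIGN(addr + 1, CONT_PTES * PAGE_SIZE);
	return (min(next, end) - addr) >> PAGE_SHIFT;
}
#define pte_batch_remaining pte_batch_remaining
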
>>
>> It's a shame we now lose the optimization for all other architectures.
>>
>> Was there no way to have some basic batching mechanism that doesn't require arch
>> specifics?
> 
> I tried a bunch of things but ultimately the way I've done it was the only way
> to reduce the order-0 fork regression to 0.

Let me give it a try today. I think we should really focus on having
only a single folio_test_large() check on the fast path for order-0, and
not even try batching for anything that works on bare PFNs.
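
For illustration, such a check could look roughly like the sketch below:
it is only reached after vm_normal_page() has returned a page, so
bare-PFN mappings never see the batching path; order-0 folios pay exactly
one folio_test_large() check; and per-pte bits such as dirty/soft-dirty
are ignored for brevity. The helper name is made up:

/*
 * Sketch: count how many consecutive ptes map consecutive pages of the
 * same large folio. Order-0 folios take the early return after a single
 * folio_test_large() check. Per-pte bits (dirty, soft-dirty, ...) are
 * deliberately ignored here for brevity.
 */
static inline int folio_pte_batch_sketch(struct folio *folio,
		struct page *page, pte_t *ptep, pte_t pte, int max_nr)
{
	unsigned long pfn = pte_pfn(pte);
	long in_folio;
	int nr = 1;

	if (!folio_test_large(folio))
		return 1;

	/* Never batch past the end of the folio. */
	in_folio = folio_nr_pages(folio) - folio_page_idx(folio, page);
	if (max_nr > in_folio)
		max_nr = in_folio;

	while (nr < max_nr) {
		pte_t next = ptep_get(ptep + nr);

		if (!pte_present(next) || pte_pfn(next) != pfn + nr)
			break;
		nr++;
	}

	return nr;
}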

Off to prototyping ... :)

-- 
Cheers,

David / dhildenb



