[PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()

David Hildenbrand david at redhat.com
Wed Dec 20 05:13:30 PST 2023


On 20.12.23 14:10, Ryan Roberts wrote:
> On 20/12/2023 13:06, David Hildenbrand wrote:
>> On 20.12.23 13:04, Ryan Roberts wrote:
>>> On 20/12/2023 11:58, David Hildenbrand wrote:
>>>> On 20.12.23 12:51, Ryan Roberts wrote:
>>>>> On 20/12/2023 11:36, David Hildenbrand wrote:
>>>>>> On 20.12.23 12:28, Ryan Roberts wrote:
>>>>>>> On 20/12/2023 10:56, David Hildenbrand wrote:
>>>>>>>> On 20.12.23 11:41, Ryan Roberts wrote:
>>>>>>>>> On 20/12/2023 10:16, David Hildenbrand wrote:
>>>>>>>>>> On 20.12.23 11:11, Ryan Roberts wrote:
>>>>>>>>>>> On 20/12/2023 09:54, David Hildenbrand wrote:
>>>>>>>>>>>> On 20.12.23 10:51, Ryan Roberts wrote:
>>>>>>>>>>>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>>>>>>>>>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>>>>>>>>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>>>>>>>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>>>>>>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>>>>>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>>>>>>>>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>>>>>>>>>>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>>>>>>>>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of
>>>>>>>>>>>>>>>>>>> memory, all belonging to the same folio. A pte batch is then
>>>>>>>>>>>>>>>>>>> write-protected in one go in the parent using the new helper,
>>>>>>>>>>>>>>>>>>> ptep_set_wrprotects() and is set in one go in the child using the new
>>>>>>>>>>>>>>>>>>> helper, set_ptes_full().
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>>>>>>>>>>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>>>>>>>>>>>>>>> fork, as it is about to add transparent support for the "contiguous
>>>>>>>>>>>>>>>>>>> bit" in its ptes. By write-protecting the parent using the new
>>>>>>>>>>>>>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>>>>>>>>>>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>>>>>>>>>>>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>>>>>>>>>>>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>>>>>>>>>>>>>>>> child, the backend does not need to fold a contiguous range once they
>>>>>>>>>>>>>>>>>>> are all populated - they can be initially populated as a contiguous
>>>>>>>>>>>>>>>>>>> range in the first place.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This code is very performance sensitive, and a significant amount of
>>>>>>>>>>>>>>>>>>> effort has been put into not regressing performance for the order-0
>>>>>>>>>>>>>>>>>>> folio case. By default, pte_batch_remaining() is a compile-time
>>>>>>>>>>>>>>>>>>> constant 1, which enables the compiler to simplify the extra loops
>>>>>>>>>>>>>>>>>>> that are added for batching and produce code that is equivalent (and
>>>>>>>>>>>>>>>>>>> equally performant) to the previous implementation.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This change addresses the core-mm refactoring only; a separate change
>>>>>>>>>>>>>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>>>>>>>>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>>>>>>>>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> To ensure that arm64 is performant once implemented, this change is
>>>>>>>>>>>>>>>>>>> very careful to only call ptep_get() once per pte batch.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The following microbenchmark results demonstrate that there is no
>>>>>>>>>>>>>>>>>>> significant performance change after this patch. Fork is called in a
>>>>>>>>>>>>>>>>>>> tight loop in a process with 1G of populated memory, and the time for
>>>>>>>>>>>>>>>>>>> the function to execute is measured. 100 iterations per run, 8 runs
>>>>>>>>>>>>>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>>>>>>>>>>>>>>>> performed for the case where the 1G of memory is comprised of order-0
>>>>>>>>>>>>>>>>>>> folios and for the case where it is comprised of pte-mapped order-9
>>>>>>>>>>>>>>>>>>> folios. Negative is faster, positive is slower, compared to the
>>>>>>>>>>>>>>>>>>> baseline upon which the series is based:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>>>>>>>>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>>>>>>>>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Tested-by: John Hubbard <jhubbard at nvidia.com>
>>>>>>>>>>>>>>>>>>> Reviewed-by: Alistair Popple <apopple at nvidia.com>
>>>>>>>>>>>>>>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
>>>>>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>>>>>  include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>>>>>>>>>>>>>>  mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>>>>>>>>>>>>>>>>>>>  2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>>>>>>>>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>>>>>>>>>>>>>> --- a/include/linux/pgtable.h
>>>>>>>>>>>>>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>>>>>>>>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>>>>>>>>>>>>>            #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>>>>>>>>>>>>>>>>>            #endif
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> +#ifndef pte_batch_remaining
>>>>>>>>>>>>>>>>>>> +/**
>>>>>>>>>>>>>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>>>>>>>>>>>>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>>>>>>>>>>>>>> + * @addr: Address of the first page.
>>>>>>>>>>>>>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
>>>>>>>>>>>>>>>>>>> + * In such cases, this function returns the remaining number of pages to the end
>>>>>>>>>>>>>>>>>>> + * of the current batch, as defined by addr. This can be useful when iterating
>>>>>>>>>>>>>>>>>>> + * over ptes.
>>>>>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>>>>>>>>>>>>>>> + */
>>>>>>>>>>>>>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>>>>>>>>>>>>>>>>>> +                        unsigned long end)
>>>>>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>>>>>> +    return 1;
>>>>>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>>>>>> +#endif
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> It's a shame we now lose the optimization for all other architectures.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>>>>>>>>>>>>>>> require arch specifics?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I tried a bunch of things but ultimately the way I've done it was the
>>>>>>>>>>>>>>>>> only way to reduce the order-0 fork regression to 0.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> My original v3 posting was costing 5% extra and even my first attempt
>>>>>>>>>>>>>>>>> at an arch-specific version that didn't resolve to a compile-time
>>>>>>>>>>>>>>>>> constant 1 still cost an extra 3%.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>>>>>>>>>>>>>> * Check that PFN is consecutive
>>>>>>>>>>>>>>>>>> * Check that all PFNs belong to the same folio
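For concreteness, those checks boil down to something like the following untested
sketch (assuming a pte_next_pfn() helper that advances the pfn while leaving all
other bits alone, and a caller that has already clamped max_nr so the batch cannot
run past the folio or the VMA; access/dirty bits would likely need extra care in
real code):

/*
 * Hypothetical sketch: count how many consecutive ptes, starting at @ptep,
 * map consecutive pfns with all other pte bits identical to @pte. The
 * caller is assumed to have clamped @max_nr to the pages remaining in the
 * folio and the VMA.
 */
static inline int pte_batch_count_sketch(pte_t *ptep, pte_t pte, int max_nr)
{
        pte_t expect = pte_next_pfn(pte);
        int nr = 1;

        while (nr < max_nr) {
                if (!pte_same(ptep_get(ptep + nr), expect))
                        break;
                expect = pte_next_pfn(expect);
                nr++;
        }
        return nr;
}

Because pte_same() compares all bits, advancing only the pfn in the expected value
covers the first two checks, and clamping max_nr to the folio size covers the third.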
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I haven't tried this exact approach, but I'd be surprised if I can get
>>>>>>>>>>>>>>>>> the regression under 4% with this. Further along the series I spent a
>>>>>>>>>>>>>>>>> lot of time having to fiddle with the arm64 implementation; every
>>>>>>>>>>>>>>>>> conditional and every memory read (even when in cache) was a problem.
>>>>>>>>>>>>>>>>> There is just so little in the inner loop that every instruction
>>>>>>>>>>>>>>>>> matters. (At least on Ampere Altra and Apple M2).
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Of course if you're willing to pay that 4-5% for order-0 then the
>>>>>>>>>>>>>>>>> benefit to order-9 is around 10% in my measurements. Personally though,
>>>>>>>>>>>>>>>>> I'd prefer to play safe and ensure the common order-0 case doesn't
>>>>>>>>>>>>>>>>> regress, as you previously suggested.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I just hacked something up, on top of my beloved rmap cleanup/batching
>>>>>>>>>>>>>>>> series. I implemented very generic and simple batching for large folios
>>>>>>>>>>>>>>>> (all PTE bits except the PFN have to match).
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Some very quick testing (don't trust each last %) on Intel(R) Xeon(R)
>>>>>>>>>>>>>>>> Silver 4210R CPU.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> order-0: 0.014210 -> 0.013969
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> -> Around 1.7 % faster
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> order-9: 0.014373 -> 0.009149
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> -> Around 36.3 % faster
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Well I guess that shows me :)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I'll do a review and run the tests on my HW to see if it concurs.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>>>>>>>>>>>>
>>>>>>>>>>>>> I've just been trying to compile and noticed this. Will take a look at
>>>>>>>>>>>>> your update.
>>>>>>>>>>>>>
>>>>>>>>>>>>> But upon review, I've noticed the part that I think makes this difficult
>>>>>>>>>>>>> for arm64 with the contpte optimization; you are calling ptep_get() for
>>>>>>>>>>>>> every pte in the batch. While this is functionally correct, once arm64
>>>>>>>>>>>>> has the contpte changes, its ptep_get() has to read every pte in the
>>>>>>>>>>>>> contpte block in order to gather the access and dirty bits. So if your
>>>>>>>>>>>>> batching function ends up walking a 16-entry contpte block, that will
>>>>>>>>>>>>> cause 16 x 16 reads, which kills performance. That's why I added the
>>>>>>>>>>>>> arch-specific pte_batch_remaining() function; this allows the core-mm to
>>>>>>>>>>>>> skip to the end of the contpte block and avoid ptep_get() for the 15
>>>>>>>>>>>>> tail ptes. So we end up with 16 READ_ONCE()s instead of 256.
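For illustration, an arch override in the spirit Ryan describes could look roughly
like the following untested sketch (reusing arm64's existing CONT_PTE_SIZE granule
and pte_valid_cont() helper; this is a hypothetical sketch, not the actual arm64
patch from the series):

/*
 * Hypothetical arm64-style override: if this pte is part of a contiguous
 * (contpte) mapping, report how many entries remain up to the end of that
 * block (clamped to @end), so the caller can skip per-pte ptep_get() calls.
 */
static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
                                               unsigned long end)
{
        unsigned long next;

        if (!pte_valid_cont(pte))
                return 1;

        next = ALIGN(addr + 1, CONT_PTE_SIZE);
        return (min(next, end) - addr) >> PAGE_SHIFT;
}
#define pte_batch_remaining pte_batch_remaining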
>>>>>>>>>>>>>
>>>>>>>>>>>>> I considered making a ptep_get_noyoungdirty() variant, which would avoid
>>>>>>>>>>>>> the bit gathering. But we have a similar problem in zap_pte_range() and
>>>>>>>>>>>>> that function needs the dirty bit to update the folio. So it doesn't
>>>>>>>>>>>>> work there. (see patch 3 in my series).
>>>>>>>>>>>>>
>>>>>>>>>>>>> I guess you are going to say that we should combine both approaches, so
>>>>>>>>>>>>> that your batching loop can skip forward an arch-provided number of
>>>>>>>>>>>>> ptes? That would certainly work, but feels like an orthogonal change to
>>>>>>>>>>>>> what I'm trying to achieve :). Anyway, I'll spend some time playing with
>>>>>>>>>>>>> it today.
>>>>>>>>>>>>
>>>>>>>>>>>> You can overwrite the function or add special-casing internally, yes.
>>>>>>>>>>>>
>>>>>>>>>>>> Right now, your patch is called "mm: Batch-copy PTE ranges during
>>>>>>>>>>>> fork()" and it doesn't do any of that besides preparing for some arm64
>>>>>>>>>>>> work.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Well it allows an arch to opt-in to batching. But I see your point.
>>>>>>>>>>>
>>>>>>>>>>> How do you want to handle your patches? Do you want to clean them up and
>>>>>>>>>>> I'll base my stuff on top? Or do you want me to take them and sort it
>>>>>>>>>>> all out?
>>>>>>>>>>
>>>>>>>>>> Whatever you prefer, it was mostly a quick prototype to see if we can
>>>>>>>>>> achieve decent performance.
>>>>>>>>>
>>>>>>>>> I'm about to run it on Altra and M2. But I assume it will show similar
>>>>>>>>> results.
>>>>>>>
>>>>>>> OK results in, not looking great, which aligns with my previous experience.
>>>>>>> That said, I'm seeing some "BUG: Bad page state in process gmain  pfn:12a094"
>>>>>>> so perhaps these results are not valid...
>>>>>>
>>>>>> I didn't see that so far on x86, maybe related to the PFN fixup?
>>>>>
>>>>> All I've done is define PFN_PTE_SHIFT for arm64 on top of your latest patch:
>>>>>
>>>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>>>> index b19a8aee684c..9eb0fd693df9 100644
>>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>>> @@ -359,6 +359,8 @@ static inline void set_ptes(struct mm_struct *mm,
>>>>>     }
>>>>>     #define set_ptes set_ptes
>>>>>     +#define PFN_PTE_SHIFT          PAGE_SHIFT
>>>>> +
>>>>>     /*
>>>>>      * Huge pte definitions.
>>>>>      */
>>>>>
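For reference, the pte_next_pfn() compile fixup mentioned earlier presumably reduces
to a generic helper along these lines once PFN_PTE_SHIFT is available; treat this as
an untested sketch rather than the exact code from the branch:

#ifndef pte_next_pfn
/*
 * Sketch of a generic pte_next_pfn(): advance the pfn encoded in the pte by
 * one page, leaving all other bits untouched. Assumes the architecture
 * defines PFN_PTE_SHIFT (as in the arm64 hunk above).
 */
static inline pte_t pte_next_pfn(pte_t pte)
{
        return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
#endif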
>>>>>
>>>>> As an aside, I think there is a bug in arm64's set_ptes() for PA > 48-bit
>>>>> case. But that won't affect this.
>>>>>
>>>>>
>>>>> With VM_DEBUG on, this is the first warning I see during boot:
>>>>>
>>>>>
>>>>> [    0.278110] page:00000000c7ced4e8 refcount:12 mapcount:0 mapping:00000000b2f9739b index:0x1a8 pfn:0x1bff30
>>>>> [    0.278742] head:00000000c7ced4e8 order:2 entire_mapcount:0 nr_pages_mapped:2 pincount:0
>>>>
>>>> ^ Ah, you are running with mTHP. Let me play with that.
>>>
>>> Err... It's in mm-unstable, but I'm not enabling any sizes. It should
>>> only be set up for PMD-sized THP.
>>>
>>> I am using XFS though, so I imagine it's a file folio.
>>>
>>> I've rebased your rmap cleanup and fork batching onto the version of
>>> mm-unstable that I was doing all my other testing with, so I could compare
>>> numbers. But it's not very old (perhaps a week). All the patches applied
>>> without any conflict.
>>
>>
>> It would also be interesting to know if the compiler on arm64 decides to
>> do something stupid, like not inlining wrprotect_ptes().
>>
>> Because with an effective unlikely(folio_test_large(folio)) we shouldn't see
>> that much overhead.
>>
> 
> What version of gcc are you using? I must confess I'm using the Ubuntu 20.04
> default version:
> 
> aarch64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
> 
> Perhaps I should grab something a bit newer?
> 

gcc version 13.2.1 20231011 (Red Hat 13.2.1-4) (GCC)

From Fedora 38. So "a bit" newer :P

-- 
Cheers,

David / dhildenb



