[PATCH v2 1/3] arm64: mm: support batch clearing of the young flag for large folios
Baolin Wang
baolin.wang at linux.alibaba.com
Thu Dec 18 17:00:34 PST 2025
On 2025/12/18 20:20, Ryan Roberts wrote:
> On 18/12/2025 07:15, Baolin Wang wrote:
>>
>>
>> On 2025/12/17 23:43, Ryan Roberts wrote:
>>> Sorry I'm a bit late to the party...
>>
>> No worries, it's not late at all - comments are always welcome :)
>>
>>> On 11/12/2025 08:16, Baolin Wang wrote:
>>>> Currently, contpte_ptep_test_and_clear_young() and
>>>> contpte_ptep_clear_flush_young() only clear the young flag and flush
>>>> TLBs for PTEs within the contiguous range. To support batch PTE
>>>> operations for large folios of other sizes in the following patches,
>>>> add a new parameter to specify the number of PTEs.
>>>>
>>>> While we are at it, rename the functions to maintain consistency with other
>>>> contpte_*() functions.
>>>>
>>>> Signed-off-by: Baolin Wang <baolin.wang at linux.alibaba.com>
>>>> ---
>>>> arch/arm64/include/asm/pgtable.h | 12 ++++-----
>>>> arch/arm64/mm/contpte.c | 44 ++++++++++++++++++++++----------
>>>> 2 files changed, 37 insertions(+), 19 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>>> index 0944e296dd4a..e03034683156 100644
>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>> @@ -1679,10 +1679,10 @@ extern void contpte_clear_full_ptes(struct mm_struct *mm, unsigned long addr,
>>>> extern pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
>>>> unsigned long addr, pte_t *ptep,
>>>> unsigned int nr, int full);
>>>> -extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>> - unsigned long addr, pte_t *ptep);
>>>> -extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
>>>> - unsigned long addr, pte_t *ptep);
>>>> +extern int contpte_test_and_clear_young_ptes(struct vm_area_struct *vma,
>>>> + unsigned long addr, pte_t *ptep, unsigned int nr);
>>>> +extern int contpte_clear_flush_young_ptes(struct vm_area_struct *vma,
>>>> + unsigned long addr, pte_t *ptep, unsigned int nr);
>>>
>>> The "contpte_" functions are intended to be private to the arm64 arch and should
>>> be exposed via the generic APIs. But I don't see any generic batched API for
>>> this, so you're only actually able to pass CONT_PTES as nr. Perhaps you're
>>> planning to define "test_and_clear_young_ptes()" and "clear_flush_young_ptes()"
>>> in later patches?
>>
>> Right. This is a preparation patch, and will be used in patch 2.
>>
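(For context: patch 2 is expected to add the generic entry points that make
'nr' usable from core-mm. A rough sketch of what the fallback in
include/linux/pgtable.h could look like; the names follow Ryan's suggestion
and the loop is an assumption until that patch is posted:

#ifndef clear_flush_young_ptes
static inline int clear_flush_young_ptes(struct vm_area_struct *vma,
					 unsigned long addr, pte_t *ptep,
					 unsigned int nr)
{
	int young = 0;
	unsigned int i;

	/* Generic fallback: age and flush each PTE individually. */
	for (i = 0; i < nr; i++, ptep++, addr += PAGE_SIZE)
		young |= ptep_clear_flush_young(vma, addr, ptep);

	return young;
}
#endif

arm64 would then override this, in the same way wrprotect_ptes is overridden
below, and route into contpte_clear_flush_young_ptes() when the PTEs are
contiguous.)
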
>>>> extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>>>> pte_t *ptep, unsigned int nr);
>>>> extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>>>> @@ -1854,7 +1854,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>> if (likely(!pte_valid_cont(orig_pte)))
>>>> return __ptep_test_and_clear_young(vma, addr, ptep);
>>>> - return contpte_ptep_test_and_clear_young(vma, addr, ptep);
>>>> + return contpte_test_and_clear_young_ptes(vma, addr, ptep, CONT_PTES);
>>>> }
>>>> #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
>>>> @@ -1866,7 +1866,7 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
>>>> if (likely(!pte_valid_cont(orig_pte)))
>>>> return __ptep_clear_flush_young(vma, addr, ptep);
>>>> - return contpte_ptep_clear_flush_young(vma, addr, ptep);
>>>> + return contpte_clear_flush_young_ptes(vma, addr, ptep, CONT_PTES);
>>>> }
>>>> #define wrprotect_ptes wrprotect_ptes
>>>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>>>> index c0557945939c..19b122441be3 100644
>>>> --- a/arch/arm64/mm/contpte.c
>>>> +++ b/arch/arm64/mm/contpte.c
>>>> @@ -488,8 +488,9 @@ pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
>>>> }
>>>> EXPORT_SYMBOL_GPL(contpte_get_and_clear_full_ptes);
>>>> -int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>> - unsigned long addr, pte_t *ptep)
>>>> +int contpte_test_and_clear_young_ptes(struct vm_area_struct *vma,
>>>> + unsigned long addr, pte_t *ptep,
>>>> + unsigned int nr)
>>>> {
>>>> /*
>>>> * ptep_clear_flush_young() technically requires us to clear the access
>>>> @@ -500,39 +501,56 @@ int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>> * having to unfold.
>>>> */
>>>> + unsigned long start = addr;
>>>
>>> Personally I wouldn't bother defining start - just reuse addr. You're
>>> incrementing start in the below loop, so it's more appropriate to call it addr
>>> anyway.
>>
>> OK.
>>
>>>> + unsigned long end = start + nr * PAGE_SIZE;
>>>> int young = 0;
>>>> int i;
>>>> - ptep = contpte_align_down(ptep);
>>>> - addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
>>>> + if (pte_cont(__ptep_get(ptep + nr - 1)))
>>>> + end = ALIGN(end, CONT_PTE_SIZE);
>>>> - for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE)
>>>> - young |= __ptep_test_and_clear_young(vma, addr, ptep);
>>>> + if (pte_cont(__ptep_get(ptep))) {
>>>> + start = ALIGN_DOWN(start, CONT_PTE_SIZE);
>>>> + ptep = contpte_align_down(ptep);
>>>> + }
>>>> +
>>>> + nr = (end - start) / PAGE_SIZE;
>>>> + for (i = 0; i < nr; i++, ptep++, start += PAGE_SIZE)
>>>
>>> Given you're now defining end, perhaps we don't need nr?
>>>
>>> for (; addr != end; ptep++, addr += PAGE_SIZE)
>>> young |= __ptep_test_and_clear_young(vma, addr, ptep);
>>
>> Yes, good point.
>>
>>>> + young |= __ptep_test_and_clear_young(vma, start, ptep);
>>>> return young;
>>>> }
>>>> -EXPORT_SYMBOL_GPL(contpte_ptep_test_and_clear_young);
>>>> +EXPORT_SYMBOL_GPL(contpte_test_and_clear_young_ptes);
>>>> -int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
>>>> - unsigned long addr, pte_t *ptep)
>>>> +int contpte_clear_flush_young_ptes(struct vm_area_struct *vma,
>>>> + unsigned long addr, pte_t *ptep,
>>>> + unsigned int nr)
>>>> {
>>>> int young;
>>>> - young = contpte_ptep_test_and_clear_young(vma, addr, ptep);
>>>> + young = contpte_test_and_clear_young_ptes(vma, addr, ptep, nr);
>>>> if (young) {
>>>> + unsigned long start = addr;
>>>> + unsigned long end = start + nr * PAGE_SIZE;
>>>> +
>>>> + if (pte_cont(__ptep_get(ptep + nr - 1)))
>>>> + end = ALIGN(end, CONT_PTE_SIZE);
>>>> +
>>>> + if (pte_cont(__ptep_get(ptep)))
>>>> + start = ALIGN_DOWN(start, CONT_PTE_SIZE);
>>>> +
>>>
>>> We now have this pattern of expanding contpte blocks up and down in 3 places.
>>> Perhaps create a helper?
>>
>> Sounds reasonable. How about the following helper?
>>
>> static pte_t *contpte_align_addr_ptep(unsigned long *start, unsigned long *end,
>> 				      pte_t *ptep, unsigned int nr)
>> {
>> 	unsigned long end_addr = *start + nr * PAGE_SIZE;
>>
>> 	if (pte_cont(__ptep_get(ptep + nr - 1)))
>
> I think this is safe but calling it out to check; you're not checking that the
> pte is valid, so theoretically you could have a swap-entry here with whatever
> overlays the contiguous bit set. So then you would incorrectly extend.
>
> But I think it is safe because the expectation is that core-mm has already
> checked that the whole range is present?
Yes. They must be present PTEs that map consecutive pages of the same
large folio within a single VMA and a single page table. I will add some
comments to make this clear.
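Something along these lines above the helper (exact wording TBD):

/*
 * The caller must ensure that all 'nr' PTEs are present and map
 * consecutive pages of the same large folio within a single VMA and a
 * single page table. Hence it is safe to check PTE_CONT without first
 * checking that each PTE is valid.
 */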
>> 		end_addr = ALIGN(end_addr, CONT_PTE_SIZE);
>> 	*end = end_addr;
>>
>> 	if (pte_cont(__ptep_get(ptep))) {
>> 		*start = ALIGN_DOWN(*start, CONT_PTE_SIZE);
>> 		ptep = contpte_align_down(ptep);
>> 	}
>>
>> 	return ptep;
>> }
>
> Looks good.
Thanks for reviewing.
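For completeness, with the helper and the addr/end loop suggested above, the
caller would reduce to roughly the following (an untested sketch that I will
double-check for the next version):

int contpte_test_and_clear_young_ptes(struct vm_area_struct *vma,
				      unsigned long addr, pte_t *ptep,
				      unsigned int nr)
{
	unsigned long end;
	int young = 0;

	/* Expand [addr, end) to cover any contpte blocks it overlaps. */
	ptep = contpte_align_addr_ptep(&addr, &end, ptep, nr);

	for (; addr != end; ptep++, addr += PAGE_SIZE)
		young |= __ptep_test_and_clear_young(vma, addr, ptep);

	return young;
}

contpte_clear_flush_young_ptes() would use the same helper to compute the
expanded range for its TLB flush.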