[PATCH v4 3/4] mm: Support batched unmap for lazyfree large folios during reclamation
David Hildenbrand
david at redhat.com
Tue Jul 1 09:17:21 PDT 2025
>>> + /* Nuke the page table entry. */
>>> + pteval = get_and_clear_full_ptes(mm, address, pvmw.pte, nr_pages, 0);
>>> + /*
>>> + * We clear the PTE but do not flush so potentially
>>> + * a remote CPU could still be writing to the folio.
>>> + * If the entry was previously clean then the
>>> + * architecture must guarantee that a clear->dirty
>>> + * transition on a cached TLB entry is written through
>>> + * and traps if the PTE is unmapped.
>>> + */
>>> + if (should_defer_flush(mm, flags))
>>> + set_tlb_ubc_flush_pending(mm, pteval, address, end_addr);
>>
>> When the first PTE of a PTE-mapped THP has the _PAGE_PROTNONE bit set
>> (by NUMA balancing), can set_tlb_ubc_flush_pending() mistakenly decide that
>> it doesn't need to flush the whole range, even though some PTEs in the range
>> don't have the _PAGE_PROTNONE bit set?
>
> No, then folio_pte_batch() should have returned nr < folio_nr_pages(folio).
Right, folio_pte_batch() currently does not batch across PTEs that
differ in pte_protnone().
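
To illustrate that rule, here is a minimal userspace sketch (this is not
the kernel's folio_pte_batch() implementation; the struct and helper names
are made up for the example). It shows the idea that a batch only extends
across consecutive entries whose protnone state matches the first entry,
so a PROT_NONE PTE set by NUMA balancing in the middle of a folio ends the
batch early and nr ends up smaller than the folio's PTE count:

/*
 * Userspace sketch only: models the rule that a PTE batch stops at the
 * first entry whose prot_none state differs from the first entry's.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_pte {
	bool present;
	bool prot_none;		/* stand-in for pte_protnone() */
};

/* Count how many consecutive entries can be batched with entries[0]. */
static unsigned int sketch_pte_batch(const struct fake_pte *entries,
				     unsigned int max_nr)
{
	unsigned int nr = 1;

	while (nr < max_nr &&
	       entries[nr].present == entries[0].present &&
	       entries[nr].prot_none == entries[0].prot_none)
		nr++;

	return nr;
}

int main(void)
{
	/* 4 PTEs of a folio; the third was made PROT_NONE by NUMA balancing. */
	struct fake_pte ptes[4] = {
		{ .present = true, .prot_none = false },
		{ .present = true, .prot_none = false },
		{ .present = true, .prot_none = true  },
		{ .present = true, .prot_none = false },
	};

	/* Prints 2: the batch ends before the prot_none entry. */
	printf("batched %u of 4 PTEs\n", sketch_pte_batch(ptes, 4));
	return 0;
}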
--
Cheers,
David / dhildenb