[PATCH v3 1/1] mm/rmap: fix potential out-of-bounds page table access during batched unmap

David Hildenbrand david at redhat.com
Tue Jul 1 07:03:06 PDT 2025


On 30.06.25 03:13, Lance Yang wrote:
> From: Lance Yang <lance.yang at linux.dev>
> 
> As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
> may read past the end of a PTE table when a large folio's PTE mappings
> are not fully contained within a single page table.
> 
> While this scenario might be rare, an issue triggerable from userspace must
> be fixed regardless of its likelihood. This patch fixes the out-of-bounds
> access by refactoring the logic into a new helper, folio_unmap_pte_batch().
> 
> The new helper correctly calculates the safe batch size by capping the scan
> at both the VMA and PMD boundaries. To simplify the code, it also supports
> partial batching (i.e., any number of pages from 1 up to the calculated
> safe maximum), as there is no strong reason to special-case fully
> mapped folios.
> 
> [1] https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
> 
> Fixes: 354dffd29575 ("mm: support batched unmap for lazyfree large folios during reclamation")
> Cc: <stable at vger.kernel.org>
> Acked-by: Barry Song <baohua at kernel.org>
> Suggested-by: David Hildenbrand <david at redhat.com>
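
[ For context: a minimal, standalone sketch of the boundary capping
described in the quoted commit message. The max_pte_batch() helper and
the 4K-page / 2M-PMD constants below are illustrative assumptions for a
user-space demo; this is not the actual folio_unmap_pte_batch() added
by the patch. ]

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)

/*
 * Cap a PTE batch so the scan never walks past the end of the current
 * PTE table (one PMD's worth of mappings) or past the end of the VMA.
 */
static unsigned long max_pte_batch(unsigned long addr, unsigned long vma_end,
				   unsigned long folio_pages_left)
{
	/* pages remaining until the next PMD boundary */
	unsigned long to_pmd_end = (PMD_SIZE - (addr & (PMD_SIZE - 1))) / PAGE_SIZE;
	/* pages remaining until the end of the VMA */
	unsigned long to_vma_end = (vma_end - addr) / PAGE_SIZE;
	unsigned long max = folio_pages_left;

	if (max > to_pmd_end)
		max = to_pmd_end;
	if (max > to_vma_end)
		max = to_vma_end;
	return max;
}

int main(void)
{
	/* a 16-page folio mapped so that only 4 PTEs remain in this table */
	unsigned long addr = 0x200000UL - 4 * PAGE_SIZE;
	unsigned long vma_end = 0x400000UL;

	printf("batch = %lu\n", max_pte_batch(addr, vma_end, 16));	/* prints 4, not 16 */
	return 0;
}

With addr sitting four pages below a 2M boundary, the batch is capped at
4 even though 16 pages of the folio remain mapped, which is exactly the
case where scanning the full folio would read past the end of the PTE
table.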

Realized this now: This should probably be a "Reported-by:" with a 
"Closes:" tag and a link to my mail.

-- 
Cheers,

David / dhildenb
