[PATCH v2 1/1] mm/rmap: fix potential out-of-bounds page table access during batched unmap
Barry Song
21cnbao at gmail.com
Thu Jun 26 23:52:43 PDT 2025
On Fri, Jun 27, 2025 at 6:23 PM Lance Yang <ioworker0 at gmail.com> wrote:
>
> From: Lance Yang <lance.yang at linux.dev>
>
> As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
> can read past the end of a PTE table if a large folio is mapped starting at
> the last entry of that table. It would be quite rare in practice, as
> MADV_FREE typically splits the large folio ;)
>
> So let's fix the potential out-of-bounds read by refactoring the logic into
> a new helper, folio_unmap_pte_batch().
>
> The new helper now correctly calculates the safe number of pages to scan by
> limiting the operation to the boundaries of the current VMA and the PTE
> table.
>
> In addition, the "all-or-nothing" batching restriction is removed to
> support partial batches. The reference counting is also cleaned up to use
> folio_put_refs().
>
> [1] https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
>
What about something like this?
As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
may read past the end of a PTE table when a large folio spans two PMDs,
particularly after being remapped with mremap(). This patch fixes the
potential out-of-bounds access by capping the batch at vm_end and the PMD
boundary.

It also refactors the logic into a new helper, folio_unmap_pte_batch(),
which supports batch sizes between 1 and folio_nr_pages(). This improves
code clarity. Note that such cases are rare in practice, as MADV_FREE
typically splits large folios.
Thanks
Barry