[PATCH v4 1/1] mm/rmap: fix potential out-of-bounds page table access during batched unmap

Barry Song 21cnbao at gmail.com
Mon Jul 7 08:40:55 PDT 2025


On Mon, Jul 7, 2025 at 1:40 PM Harry Yoo <harry.yoo at oracle.com> wrote:
>
> On Tue, Jul 01, 2025 at 10:31:00PM +0800, Lance Yang wrote:
> > From: Lance Yang <lance.yang at linux.dev>
> >
> > As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
> > may read past the end of a PTE table when a large folio's PTE mappings
> > are not fully contained within a single page table.
> >
> > While this scenario might be rare, an issue triggerable from userspace must
> > be fixed regardless of its likelihood. This patch fixes the out-of-bounds
> > access by refactoring the logic into a new helper, folio_unmap_pte_batch().
> >
> > The new helper correctly calculates the safe batch size by capping the scan
> > at both the VMA and PMD boundaries. To simplify the code, it also supports
> > partial batching (i.e., any number of pages from 1 up to the calculated
> > safe maximum), as there is no strong reason to special-case for fully
> > mapped folios.
> >
> > [1] https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
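
To make the boundary capping concrete, the bound is conceptually computed as
below. This is a simplified sketch with an illustrative helper name, not the
exact folio_unmap_pte_batch() added by the patch; the real helper additionally
has to confirm that the PTEs actually map consecutive pages of the folio.

/*
 * Simplified sketch (illustrative name, not the exact helper from the
 * patch): cap the number of PTEs we may batch starting at @addr so the
 * scan never crosses the end of the VMA, the end of the current PTE
 * table (the PMD boundary), or the end of the folio itself.
 */
static unsigned int max_unmap_batch(struct vm_area_struct *vma,
				    unsigned long addr,
				    struct folio *folio, struct page *subpage)
{
	/* pmd_addr_end() clamps to the next PMD boundary or vma->vm_end. */
	unsigned long end_addr = pmd_addr_end(addr, vma->vm_end);
	unsigned int max_nr = (end_addr - addr) >> PAGE_SHIFT;

	/* Do not batch past the last page of the folio. */
	return min_t(unsigned int, max_nr,
		     folio_nr_pages(folio) - folio_page_idx(folio, subpage));
}

The patch then accepts any batch size from 1 up to that bound, which is what
makes the partial-batching case straightforward.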
> >
> > Cc: <stable at vger.kernel.org>
> > Reported-by: David Hildenbrand <david at redhat.com>
> > Closes: https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
> > Fixes: 354dffd29575 ("mm: support batched unmap for lazyfree large folios during reclamation")
> > Suggested-by: Barry Song <baohua at kernel.org>
> > Acked-by: Barry Song <baohua at kernel.org>
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes at oracle.com>
> > Acked-by: David Hildenbrand <david at redhat.com>
> > Signed-off-by: Lance Yang <lance.yang at linux.dev>
> > ---
>
> LGTM,
> Reviewed-by: Harry Yoo <harry.yoo at oracle.com>
>
> With a minor comment below.
>
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index fb63d9256f09..1320b88fab74 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -2206,13 +2213,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >                       hugetlb_remove_rmap(folio);
> >               } else {
> >                       folio_remove_rmap_ptes(folio, subpage, nr_pages, vma);
> > -                     folio_ref_sub(folio, nr_pages - 1);
> >               }
> >               if (vma->vm_flags & VM_LOCKED)
> >                       mlock_drain_local();
> > -             folio_put(folio);
> > -             /* We have already batched the entire folio */
> > -             if (nr_pages > 1)
> > +             folio_put_refs(folio, nr_pages);
> > +
> > +             /*
> > +              * If we are sure that we batched the entire folio and cleared
> > +              * all PTEs, we can just optimize and stop right here.
> > +              */
> > +             if (nr_pages == folio_nr_pages(folio))
> >                       goto walk_done;
>
> Just a minor comment.
>
> We should probably teach page_vma_mapped_walk() to skip nr_pages pages,
> or just rely on the next_pte: do { ... } while (pte_none(ptep_get(pvmw->pte)))
> loop in page_vma_mapped_walk() to skip those ptes?
>
> Taking different paths depending on (nr_pages == folio_nr_pages(folio))
> doesn't seem sensible.

Hi Harry,

I believe we've already had this discussion here:
https://lore.kernel.org/linux-mm/5db6fb4c-079d-4237-80b3-637565457f39@redhat.com/

My main point is that nr_pages == folio_nr_pages(folio) is the typical/common
case. Also, modifying page_vma_mapped_walk() feels like a layering violation.
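
In other words, the two paths after a batch look roughly like this (a
control-flow sketch, not the literal hunk from the patch):

	/* All PTE mappings of the folio were cleared in one batch. */
	if (nr_pages == folio_nr_pages(folio))
		goto walk_done;	/* nothing left for the walk to find */

	/*
	 * Partial batch: the cleared PTEs are now pte_none(), so the
	 * existing skip loop in page_vma_mapped_walk() steps over them
	 * on the next iteration -- no changes to that function needed.
	 */
	continue;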

>
> >               continue;
>
> --
> Cheers,
> Harry / Hyeonggon

Thanks
Barry


