[PATCH v5 5/5] mm: rmap: support batched unmapping for file large folios

Barry Song 21cnbao at gmail.com
Fri Jan 16 07:14:01 PST 2026


> >
> > I mean maybe we can skip it in try_to_unmap_one(), for example:
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 9e5bd4834481..ea1afec7c802 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -2250,6 +2250,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >                */
> >               if (nr_pages == folio_nr_pages(folio))
> >                       goto walk_done;
> > +             else {
> > +                     pvmw.address += PAGE_SIZE * (nr_pages - 1);
> > +                     pvmw.pte += nr_pages - 1;
> > +             }
> >               continue;
> >  walk_abort:
> >               ret = false;
>
> I am of the opinion that we should do something like this. In the internal pvmw code,
> we keep skipping ptes till the ptes are none. With my proposed uffd-fix [1], if the old
> ptes were uffd-wp armed, pte_install_uffd_wp_if_needed will convert all ptes from none
> to not none, and we will lose the batching effect. I also plan to extend support to
> anonymous folios (therefore generalizing for all types of memory) which will set a

I posted an RFC on anon folios quite some time ago [1].
It’s great to hear that you’re interested in taking this over.

[1] https://lore.kernel.org/all/20250513084620.58231-1-21cnbao@gmail.com/

> batch of ptes as swap, and the internal pvmw code won't be able to skip through the
> batch.

Interesting, I didn't catch this issue in the earlier RFC. Back then,
we only supported nr == 1 and nr == folio_nr_pages(folio); in the
whole-folio case we broke out of the page_vma_mapped_walk() loop
entirely, so the question of skipping ahead never arose. With
Lance's commit ddd05742b45b08, arbitrary nr in [1, nr_pages] is now
supported, which means we have to handle all the complexity. :-)

Thanks
Barry