[PATCH v9 02/10] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()

Jiri Olsa olsajiri at gmail.com
Sun Jan 14 12:55:15 PST 2024


On Sun, Jan 14, 2024 at 06:33:56PM +0100, David Hildenbrand wrote:
> On 13.01.24 23:42, Jiri Olsa wrote:
> > On Thu, Dec 07, 2023 at 04:12:03PM +0000, Ryan Roberts wrote:
> > > In preparation for supporting anonymous multi-size THP, improve
> > > folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
> > > passed to it. In this case, all contained pages are accounted using the
> > > order-0 folio (or base page) scheme.
> > > 
> > > Reviewed-by: Yu Zhao <yuzhao at google.com>
> > > Reviewed-by: Yin Fengwei <fengwei.yin at intel.com>
> > > Reviewed-by: David Hildenbrand <david at redhat.com>
> > > Reviewed-by: Barry Song <v-songbaohua at oppo.com>
> > > Tested-by: Kefeng Wang <wangkefeng.wang at huawei.com>
> > > Tested-by: John Hubbard <jhubbard at nvidia.com>
> > > Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
> > > ---
> > >   mm/rmap.c | 28 ++++++++++++++++++++--------
> > >   1 file changed, 20 insertions(+), 8 deletions(-)
> > > 
> > > diff --git a/mm/rmap.c b/mm/rmap.c
> > > index 2a1e45e6419f..846fc79f3ca9 100644
> > > --- a/mm/rmap.c
> > > +++ b/mm/rmap.c
> > > @@ -1335,32 +1335,44 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
> > >    * This means the inc-and-test can be bypassed.
> > >    * The folio does not have to be locked.
> > >    *
> > > - * If the folio is large, it is accounted as a THP.  As the folio
> > > + * If the folio is pmd-mappable, it is accounted as a THP.  As the folio
> > >    * is new, it's assumed to be mapped exclusively by a single process.
> > >    */
> > >   void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> > >   		unsigned long address)
> > >   {
> > > -	int nr;
> > > +	int nr = folio_nr_pages(folio);
> > > -	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
> > > +	VM_BUG_ON_VMA(address < vma->vm_start ||
> > > +			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
> > 
> > hi,
> > I'm hitting this bug (console output below) when adding a uprobe to a
> > simple program like:
> > 
> >    $ cat up.c
> >    int main(void)
> >    {
> >       return 0;
> >    }
> > 
> >    # bpftrace -e 'uprobe:/home/jolsa/up:_start {}'
> > 
> >    $ ./up
> > 
> > it's on top of the current Linus tree master:
> >    052d534373b7 Merge tag 'exfat-for-6.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat
> > 
> > Before this patch it seems to work; I can send my .config if needed.
> 
> bpf only inserts a small folio, so no magic there.
> 
> It was:
> 	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
> And now it is
> 	VM_BUG_ON_VMA(address < vma->vm_start || address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
> 
> I think this change is sane, as long as the address is aligned to full
> pages (which it really should be).
> 
> Staring at uprobe_write_opcode(), vaddr likely isn't page-aligned ...
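> 
> For illustration (hypothetical numbers, assuming PAGE_SIZE == 4096 and a
> single-page folio, so nr == 1):
> 
> 	/*
> 	 * vma->vm_start = 0x1000, vma->vm_end = 0x2000
> 	 * vaddr = 0x1008 (probe target in the last page, not page-aligned)
> 	 *
> 	 * old check: 0x1008 >= 0x2000          -> false, no splat
> 	 * new check: 0x1008 + 0x1000 > 0x2000  -> true,  VM_BUG_ON fires
> 	 */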
> 
> Likely (hopefully) that is not an issue for __folio_set_anon(), because linear_page_index()
> will mask these bits off.
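> 
> A simplified sketch of linear_page_index() from include/linux/pagemap.h
> (hugetlb case omitted) shows why the sub-page bits don't matter there:
> 
> 	static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
> 						unsigned long address)
> 	{
> 		pgoff_t pgoff;
> 
> 		/* the right shift drops any sub-page offset bits */
> 		pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
> 		pgoff += vma->vm_pgoff;
> 		return pgoff;
> 	}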
> 
> 
> Would the following change fix it for you?

Great, that fixes it for me. You can add my

Tested-by: Jiri Olsa <jolsa at kernel.org>

thanks,
jirka

> 
> From c640a8363e47bc96965a35115a040b5f876c4320 Mon Sep 17 00:00:00 2001
> From: David Hildenbrand <david at redhat.com>
> Date: Sun, 14 Jan 2024 18:32:57 +0100
> Subject: [PATCH] tmp
> 
> Signed-off-by: David Hildenbrand <david at redhat.com>
> ---
>  kernel/events/uprobes.c | 2 +-
>  mm/rmap.c               | 1 +
>  2 files changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 485bb0389b488..929e98c629652 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -537,7 +537,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
>  		}
>  	}
> -	ret = __replace_page(vma, vaddr, old_page, new_page);
> +	ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_page);
>  	if (new_page)
>  		put_page(new_page);
>  put_old:
> diff --git a/mm/rmap.c b/mm/rmap.c
> index f5d43edad529a..a903db4df6b97 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1408,6 +1408,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>  {
>  	int nr = folio_nr_pages(folio);
> +	VM_WARN_ON_FOLIO(!IS_ALIGNED(address, PAGE_SIZE), folio);
>  	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
>  	VM_BUG_ON_VMA(address < vma->vm_start ||
>  			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
> -- 
> 2.43.0
> 
> 
> 
> -- 
> Cheers,
> 
> David / dhildenb
> 


