[PATCH v7 00/70] Introducing the Maple Tree

Yu Zhao yuzhao at google.com
Sat Apr 16 12:30:01 PDT 2022


On Sat, Apr 16, 2022 at 9:19 AM Liam Howlett <liam.howlett at oracle.com> wrote:
>

<snipped>

> How did you hit this issue?  Just on boot?

I was hoping this was already known to you, or that you had something I
could verify for you.

Anyway, this was triggered by the Chrome browser: CompositorTileWorker
is a worker thread in Chrome's renderer process [1]. With some luck, you
might be able to reproduce the problem simply by running Chrome. The
Chrome unit tests [2] would be a better option, but they take some time
to set up.

[1] https://source.chromium.org/chromium/chromium/src/+/main:content/renderer/categorized_worker_pool.cc;drc=0ac63f839b806e5e8823c5eebd6ca2db3b8f178e;l=201
[2] https://chromium.googlesource.com/chromium/src/+/HEAD/docs/testing/testing_in_chromium.md

> >   ==================================================================
> >   BUG: KASAN: invalid-access in mas_destroy+0x10a4/0x126c
> >   Read of size 8 at addr 7bffff8015c1a110 by task CompositorTileW/9966
> >   Pointer tag: [7b], memory tag: [fe]
> >
> >   CPU: 1 PID: 9966 Comm: CompositorTileW Not tainted 5.18.0-rc2-mm1-lockdep+ #2
> >   Call trace:
> >    dump_backtrace+0x1a0/0x200
> >    show_stack+0x24/0x30
> >    dump_stack_lvl+0x7c/0xa0
> >    print_report+0x15c/0x524
> >    kasan_report+0x84/0xb4
> >    kasan_tag_mismatch+0x28/0x3c
> >    __hwasan_tag_mismatch+0x30/0x60
> >    mas_destroy+0x10a4/0x126c
> >    mas_nomem+0x40/0xf4
> >    mas_store_gfp+0x9c/0xfc
> >    do_mas_align_munmap+0x344/0x688
> >    do_mas_munmap+0xf8/0x118
> >    __vm_munmap+0x154/0x1e0
> >    __arm64_sys_munmap+0x44/0x54
> >    el0_svc_common+0xfc/0x1cc
> >    do_el0_svc_compat+0x38/0x5c
> >    el0_svc_compat+0x68/0x118
> >    el0t_32_sync_handler+0xc0/0xf0
> >    el0t_32_sync+0x190/0x194
> >
> >   Allocated by task 9966:
> >    kasan_set_track+0x4c/0x7c
> >    __kasan_slab_alloc+0x84/0xa8
> >    kmem_cache_alloc_bulk+0x300/0x408
> >    mas_alloc_nodes+0x188/0x268
> >    mas_nomem+0x88/0xf4
> >    mas_store_gfp+0x9c/0xfc
> >    do_mas_align_munmap+0x344/0x688
> >    do_mas_munmap+0xf8/0x118
> >    __vm_munmap+0x154/0x1e0
> >    __arm64_sys_munmap+0x44/0x54
> >    el0_svc_common+0xfc/0x1cc
> >    do_el0_svc_compat+0x38/0x5c
> >    el0_svc_compat+0x68/0x118
> >    el0t_32_sync_handler+0xc0/0xf0
> >    el0t_32_sync+0x190/0x194
> >
> >   Freed by task 9966:
> >    kasan_set_track+0x4c/0x7c
> >    kasan_set_free_info+0x2c/0x38
> >    ____kasan_slab_free+0x13c/0x184
> >    __kasan_slab_free+0x14/0x24
> >    slab_free_freelist_hook+0x100/0x1ac
> >    kmem_cache_free_bulk+0x230/0x3b0
> >    mas_destroy+0x10d4/0x126c
> >    mas_nomem+0x40/0xf4
> >    mas_store_gfp+0x9c/0xfc
> >    do_mas_align_munmap+0x344/0x688
> >    do_mas_munmap+0xf8/0x118
> >    __vm_munmap+0x154/0x1e0
> >    __arm64_sys_munmap+0x44/0x54
> >    el0_svc_common+0xfc/0x1cc
> >    do_el0_svc_compat+0x38/0x5c
> >    el0_svc_compat+0x68/0x118
> >    el0t_32_sync_handler+0xc0/0xf0
> >    el0t_32_sync+0x190/0x194
> >
> >   The buggy address belongs to the object at ffffff8015c1a100
> >    which belongs to the cache maple_node of size 256
> >   The buggy address is located 16 bytes inside of
> >    256-byte region [ffffff8015c1a100, ffffff8015c1a200)
> >
> >   The buggy address belongs to the physical page:
> >   page:fffffffe00570600 refcount:1 mapcount:0 mapping:0000000000000000 index:0xa8ffff8015c1ad00 pfn:0x95c18
> >   head:fffffffe00570600 order:3 compound_mapcount:0 compound_pincount:0
> >   flags: 0x10200(slab|head|zone=0|kasantag=0x0)
> >   raw: 0000000000010200 6cffff8080030850 fffffffe003ec608 dbffff8080016280
> >   raw: a8ffff8015c1ad00 000000000020001e 00000001ffffffff 0000000000000000
> >   page dumped because: kasan: bad access detected
> >
> >   Memory state around the buggy address:
> >    ffffff8015c19f00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> >    ffffff8015c1a000: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> >   >ffffff8015c1a100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> >                         ^
> >    ffffff8015c1a200: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> >    ffffff8015c1a300: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> >   ==================================================================
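
For anyone skimming the stacks: both the "Allocated by" and "Freed by"
paths go through mas_store_gfp() -> mas_nomem() from the munmap path,
and the faulting read is in mas_destroy(). As a purely illustrative
sketch (not the actual do_mas_align_munmap() code from this series; the
helper name below is made up), the store in question looks roughly like
this:

#include <linux/gfp.h>
#include <linux/maple_tree.h>
#include <linux/mm_types.h>

/* Hypothetical helper, only to show the call pattern in the trace. */
static int sketch_clear_vma_range(struct mm_struct *mm,
                                  unsigned long start, unsigned long end)
{
        /* Write state covering the range being unmapped. */
        MA_STATE(mas, &mm->mm_mt, start, end - 1);

        /*
         * Store NULL over [start, end - 1].  Internally mas_store_gfp()
         * loops with mas_nomem(): on -ENOMEM it bulk-allocates spare
         * nodes (the "Allocated by" stack above), otherwise it tears
         * down leftover preallocated nodes via mas_destroy() -- the
         * "Freed by" stack and the faulting read above.
         */
        return mas_store_gfp(&mas, NULL, GFP_KERNEL);
}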


