[bug report] kmemleak observed from blktests on latest linux-block/for-next

Yi Zhang yi.zhang at redhat.com
Sun Jun 12 00:23:36 PDT 2022


Hello,
I hit the kmemleak reports below while running blktests on the latest
linux-block/for-next [1]. Please help check them, thanks.

[1]
75d6654eb3ab (origin/for-next) Merge branch 'for-5.19/block' into for-next


unreferenced object 0xffff88831d0fe800 (size 256):
  comm "check", pid 15430, jiffies 4306578361 (age 70450.608s)
  hex dump (first 32 bytes):
    a0 08 80 ab ff ff ff ff 00 80 76 a6 83 88 ff ff  ..........v.....
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000004bf8f45a>] blk_iolatency_init+0x4b/0x470
    [<00000000bdbef6c6>] blkcg_init_queue+0x122/0x4c0
    [<00000000549164e5>] __alloc_disk_node+0x23c/0x5b0
    [<0000000059f8cecc>] __blk_alloc_disk+0x31/0x60
    [<00000000a875060e>] nbd_config_put+0x6c1/0x7e0 [nbd]
    [<0000000086fab6c1>] nbd_start_device_ioctl+0x454/0x4a0 [nbd]
    [<000000009305a7c9>] configfs_write_iter+0x2b0/0x480
    [<0000000047e9815b>] new_sync_write+0x2ef/0x530
    [<0000000009113f79>] vfs_write+0x626/0x910
    [<00000000ef2d7042>] ksys_write+0xf9/0x1d0
    [<00000000ca06addd>] do_syscall_64+0x5c/0x80
    [<00000000e1ffe4b5>] entry_SYSCALL_64_after_hwframe+0x46/0xb0
unreferenced object 0xffff88818f43fe00 (size 256):
  comm "kworker/u32:13", pid 53617, jiffies 4370965500 (age 6066.292s)
  hex dump (first 32 bytes):
    a0 08 80 ab ff ff ff ff c0 62 c1 0d 81 88 ff ff  .........b......
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000004bf8f45a>] blk_iolatency_init+0x4b/0x470
    [<00000000bdbef6c6>] blkcg_init_queue+0x122/0x4c0
    [<00000000549164e5>] __alloc_disk_node+0x23c/0x5b0
    [<0000000059f8cecc>] __blk_alloc_disk+0x31/0x60
    [<0000000031ca7691>] nvme_mpath_alloc_disk+0x28a/0x8a0 [nvme_core]
    [<000000002038acbe>] nvme_alloc_ns_head+0x40c/0x740 [nvme_core]
    [<00000000e54cea22>] nvme_init_ns_head+0x4a3/0xa40 [nvme_core]
    [<000000007694f30a>] nvme_alloc_ns+0x3c7/0x1690 [nvme_core]
    [<0000000085ede1e2>] nvme_validate_or_alloc_ns+0x240/0x400 [nvme_core]
    [<000000001de40492>] nvme_scan_ns_list+0x20b/0x550 [nvme_core]
    [<00000000e799d365>] nvme_scan_work+0x2d2/0x760 [nvme_core]
    [<000000005b788977>] process_one_work+0x8d4/0x14d0
    [<00000000c452e193>] worker_thread+0x5ac/0xec0
    [<000000005065b8e4>] kthread+0x2a7/0x350
    [<00000000fe3dc1db>] ret_from_fork+0x22/0x30
unreferenced object 0xffff888720279c00 (size 256):
  comm "kworker/u32:2", pid 62305, jiffies 4370965926 (age 6065.866s)
  hex dump (first 32 bytes):
    a0 08 80 ab ff ff ff ff 58 8c b0 88 85 88 ff ff  ........X.......
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000004bf8f45a>] blk_iolatency_init+0x4b/0x470
    [<00000000bdbef6c6>] blkcg_init_queue+0x122/0x4c0
    [<00000000549164e5>] __alloc_disk_node+0x23c/0x5b0
    [<0000000059f8cecc>] __blk_alloc_disk+0x31/0x60
    [<0000000031ca7691>] nvme_mpath_alloc_disk+0x28a/0x8a0 [nvme_core]
    [<000000002038acbe>] nvme_alloc_ns_head+0x40c/0x740 [nvme_core]
    [<00000000e54cea22>] nvme_init_ns_head+0x4a3/0xa40 [nvme_core]
    [<000000007694f30a>] nvme_alloc_ns+0x3c7/0x1690 [nvme_core]
    [<0000000085ede1e2>] nvme_validate_or_alloc_ns+0x240/0x400 [nvme_core]
    [<000000001de40492>] nvme_scan_ns_list+0x20b/0x550 [nvme_core]
    [<00000000e799d365>] nvme_scan_work+0x2d2/0x760 [nvme_core]
    [<000000005b788977>] process_one_work+0x8d4/0x14d0
    [<00000000c452e193>] worker_thread+0x5ac/0xec0
    [<000000005065b8e4>] kthread+0x2a7/0x350
    [<00000000fe3dc1db>] ret_from_fork+0x22/0x30
unreferenced object 0xffff888163681c00 (size 256):
  comm "kworker/u32:13", pid 53617, jiffies 4370966347 (age 6065.585s)
  hex dump (first 32 bytes):
    a0 08 80 ab ff ff ff ff 60 31 b2 9c 82 88 ff ff  ........`1......
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000004bf8f45a>] blk_iolatency_init+0x4b/0x470
    [<00000000bdbef6c6>] blkcg_init_queue+0x122/0x4c0
    [<00000000549164e5>] __alloc_disk_node+0x23c/0x5b0
    [<0000000059f8cecc>] __blk_alloc_disk+0x31/0x60
    [<0000000031ca7691>] nvme_mpath_alloc_disk+0x28a/0x8a0 [nvme_core]
    [<000000002038acbe>] nvme_alloc_ns_head+0x40c/0x740 [nvme_core]
    [<00000000e54cea22>] nvme_init_ns_head+0x4a3/0xa40 [nvme_core]
    [<000000007694f30a>] nvme_alloc_ns+0x3c7/0x1690 [nvme_core]
    [<0000000085ede1e2>] nvme_validate_or_alloc_ns+0x240/0x400 [nvme_core]
    [<000000001de40492>] nvme_scan_ns_list+0x20b/0x550 [nvme_core]
    [<00000000e799d365>] nvme_scan_work+0x2d2/0x760 [nvme_core]
    [<000000005b788977>] process_one_work+0x8d4/0x14d0
    [<00000000c452e193>] worker_thread+0x5ac/0xec0
    [<000000005065b8e4>] kthread+0x2a7/0x350
    [<00000000fe3dc1db>] ret_from_fork+0x22/0x30
-- 
Best Regards,
  Yi Zhang