[bug report] kmemleak observed during blktests nvme/fc
Yi Zhang
yi.zhang at redhat.com
Thu Jan 15 01:24:58 PST 2026
Hi Justin and Chaitanya,

It turns out the kmemleak was caused by nvme-loop. The objects leaked
during the stress nvme loop/tcp/fc[1] run, but kmemleak only reported
them during the subsequent nvme/fc test, which is why I couldn't
reproduce this with the stress nvme/fc test alone before (the scan
commands after [1] show how to attribute a leak to a single transport).
[1]
nvme_trtype=loop ./check nvme/
nvme_trtype=tcp ./check nvme/
nvme_trtype=fc ./check nvme/
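
Note that kmemleak only reports an object after a scan runs, so a leak
from the loop run can surface while a later transport is under test.
To attribute a leak to the right transport, a scan can be forced
between runs, e.g. (assuming debugfs is mounted at /sys/kernel/debug;
kmemleak sometimes needs more than one scan before an object shows up):

echo clear > /sys/kernel/debug/kmemleak
nvme_trtype=loop ./check nvme/
echo scan > /sys/kernel/debug/kmemleak
cat /sys/kernel/debug/kmemleak

The kmemleak report from the run above: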
unreferenced object 0xffff8881295fd000 (size 1024):
comm "nvme", pid 101335, jiffies 4299282670
hex dump (first 32 bytes):
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
ff ff ff ff ff ff ff ff e0 3c 57 af ff ff ff ff .........<W.....
backtrace (crc 414bcfcd):
__kmalloc_cache_node_noprof+0x5f9/0x840
blk_mq_alloc_hctx+0x52/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x610
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8881c24db660 (size 8):
comm "nvme", pid 101335, jiffies 4299282670
hex dump (first 8 bytes):
ff ff 00 00 00 00 00 00 ........
backtrace (crc b47d4cd6):
__kmalloc_node_noprof+0x6ab/0x970
alloc_cpumask_var_node+0x56/0xb0
blk_mq_alloc_hctx+0x74/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x610
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8882752cd300 (size 128):
comm "nvme", pid 101335, jiffies 4299282670
hex dump (first 32 bytes):
00 bf f0 fb ff e8 ff ff 00 bf 30 fc ff e8 ff ff ..........0.....
00 bf 70 fc ff e8 ff ff 00 bf b0 fc ff e8 ff ff ..p.............
backtrace (crc caffc16d):
__kmalloc_node_noprof+0x6ab/0x970
blk_mq_alloc_hctx+0x43a/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x610
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff88827d5d7800 (size 512):
comm "nvme", pid 101335, jiffies 4299282670
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace (crc 93cf34af):
__kvmalloc_node_noprof+0x814/0xb30
sbitmap_init_node+0x184/0x730
blk_mq_alloc_hctx+0x4b3/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x610
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
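
All four objects are per-hctx allocations made when
nvme_alloc_admin_tag_set() sets up the admin queue, so they are only
freed once the admin tag set is torn down. Since the leak comes from
nvme-loop rather than fc, the loop driver's controller-creation
failure path presumably needs the same treatment as the fc.c change
quoted below. A minimal sketch of that idea, assuming a hypothetical
error label in nvme_loop_create_ctrl() (I haven't checked the actual
labels in drivers/nvme/target/loop.c):

/*
 * Hypothetical failure-path teardown in nvme_loop_create_ctrl(),
 * mirroring the fc.c patch quoted below; the label name is assumed.
 * Removing the admin tag set frees the admin queue and with it the
 * hctx, cpumask, ctxs and sbitmap objects seen in the traces above.
 */
out_remove_admin_tag_set:
	if (ctrl->ctrl.admin_tagset)
		nvme_remove_admin_tag_set(&ctrl->ctrl);
	nvme_uninit_ctrl(&ctrl->ctrl);
	nvme_put_ctrl(&ctrl->ctrl);
	return ERR_PTR(ret);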
On Sat, Dec 27, 2025 at 8:10 PM Yi Zhang <yi.zhang at redhat.com> wrote:
>
> > > Can you try the following? FYI: potential fix, only compile tested.
> > >
> > > From b3c2e350ae741b18c04abe489dcf9d325537c01c Mon Sep 17 00:00:00 2001
> > > From: Chaitanya Kulkarni <ckulkarnilinux at gmail.com>
> > > Date: Sun, 14 Dec 2025 19:29:24 -0800
> > > Subject: [PATCH COMPILE TESTED ONLY] nvme-fc: release admin tagset if
> > > init fails
> > >
> > > nvme_fabrics creates an NVMe/FC controller in the following path:
> > >
> > > nvmf_dev_write()
> > > -> nvmf_create_ctrl()
> > > -> nvme_fc_create_ctrl()
> > > -> nvme_fc_init_ctrl()
> > >
> > > Check ctrl->ctrl.admin_tagset in the fail_ctrl path and call
> > > nvme_remove_admin_tag_set() to release the resources.
> > >
> > > Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux at gmail.com>
> > > ---
> > > drivers/nvme/host/fc.c | 2 ++
> > > 1 file changed, 2 insertions(+)
> > >
> > > diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> > > index bc455fa98246..6948de3f438a 100644
> > > --- a/drivers/nvme/host/fc.c
> > > +++ b/drivers/nvme/host/fc.c
> > > @@ -3587,6 +3587,8 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
> > >
> > >  	ctrl->ctrl.opts = NULL;
> > >
> > > +	if (ctrl->ctrl.admin_tagset)
> > > +		nvme_remove_admin_tag_set(&ctrl->ctrl);
> > >  	/* initiate nvme ctrl ref counting teardown */
> > >  	nvme_uninit_ctrl(&ctrl->ctrl);
> > >
> > Did you get a chance to try this?
>
> Hi Chaitanya,
>
> Sorry for the late response. I tried to reproduce this issue recently,
> but had no luck reproducing it again.
> During the stress blktests nvme/fc test, however, I did hit several
> panic issues; I will report them later once I have more info.
>
>
> >
> > -ck
> >
>
>
> --
> Best Regards,
> Yi Zhang
--
Best Regards,
Yi Zhang