blktests failures with v6.19-rc1 kernel

Shinichiro Kawasaki shinichiro.kawasaki at wdc.com
Sun Dec 21 03:33:15 PST 2025


Hi all,

I ran the latest blktests (git hash: b1b99d1a36c2) with the v6.19-rc1 kernel.
I observed the 5 failures listed below. Compared with the previous report
for the v6.18 kernel [1], 1 failure was fixed (nvme/041) by Justin
(Thanks!), and 3 failures are newly observed (nvme/033, nvme/058 and srp).

[1] https://lore.kernel.org/linux-block/5b023828-b03d-4351-b6f0-e13d0df8c446@wdc.com/


List of failures
================
#1: nvme/005,063 (tcp transport)
#2: nvme/033 (new)
#3: nvme/058 (fc transport) (new)
#4: nbd/002
#5: srp (rxe driver) (new)


Failure description
===================

#1: nvme/005,063 (tcp transport)

     The test cases nvme/005 and nvme/063 fail for the tcp transport due to
     the lockdep WARN related to the three locks q->q_usage_counter,
     q->elevator_lock and set->srcu. Refer to the nvme/063 failure report
     for the v6.16-rc1 kernel [2].

     [2] https://lore.kernel.org/linux-block/4fdm37so3o4xricdgfosgmohn63aa7wj3ua4e5vpihoamwg3ui@fq42f5q5t5ic/
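
     For reference, a sketch of the invocation for these runs, assuming a
     stock blktests checkout and a lockdep-enabled kernel
     (CONFIG_PROVE_LOCKING); nvme_trtype is the blktests knob that selects
     the transport:

         nvme_trtype=tcp ./check nvme/005 nvme/063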

#2: nvme/033

     The test case nvme/033 fails due to a KASAN slab-out-of-bounds [3].
     I bisected and identified the trigger commit in the v6.19-rc1 tag,
     then posted a fix candidate patch [4].

     [4] https://lore.kernel.org/linux-nvme/20251221073714.398747-1-shinichiro.kawasaki@wdc.com/
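
     For reference, a minimal sketch of the invocation, assuming
     CONFIG_NVME_TARGET_PASSTHRU is enabled and the blktests config points
     TEST_DEVS at an NVMe device to back the passthru controller:

         nvme_trtype=loop ./check nvme/033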

#3: nvme/058

     When the test case nvme/058 is repeated for the fc transport about 50
     times, it fails due to the WARN in blk_mq_unquiesce_queue() [5]. I
     found this failure during trial runs for the v6.19-rc1 kernel, but it
     was also observed with the v6.18 kernel. The failure looks rare, and I
     guess it has existed for a while. Further debugging would be
     appreciated.
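
     A repeat loop along the following lines is enough to hit it (a sketch;
     it relies on ./check returning a non-zero exit status once the test
     fails, for example when the WARN lands in dmesg):

         # repeat nvme/058 for the fc transport until it fails
         for i in $(seq 50); do
                 nvme_trtype=fc ./check nvme/058 || break
         done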

#4: nbd/002

     The test case nbd/002 fails due to the lockdep WARN related to
     mm->mmap_lock, sk_lock-AF_INET6 and fs_reclaim. Refer to the nbd/002
     failure report for the v6.18-rc1 kernel [6].

     [6] https://lore.kernel.org/linux-block/ynmi72x5wt5ooljjafebhcarit3pvu6axkslqenikb2p5txe57@ldytqa2t4i2x/

#5: srp (rxe driver)

     All test cases in the srp test group fail when it is run with the rxe
     driver, due to an rdma_rxe driver unload failure. This problem is
     already known, and a fix patch is available [7].

     [7] https://lore.kernel.org/linux-rdma/20251219140408.2300163-1-metze@samba.org/
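
     For reference, a sketch of the invocation, assuming the blktests
     use_rxe knob (which selects the rdma_rxe driver instead of siw):

         use_rxe=1 ./check srp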


[3] KASAN slab-out-of-bounds during nvme/033

[   32.182619] [     T81] BUG: KASAN: slab-out-of-bounds in nvmet_passthru_execute_cmd_work+0xe0a/0x1750 [nvmet]
[   32.183718] [     T81] Read of size 256 at addr ffff888146030fc0 by task kworker/u16:4/81
[   32.184899] [     T81] CPU: 1 UID: 0 PID: 81 Comm: kworker/u16:4 Not tainted 6.19.0-rc1+ #69 PREEMPT(voluntary)
[   32.184903] [     T81] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-8.fc42 06/10/2025
[   32.184905] [     T81] Workqueue: nvmet-wq nvmet_passthru_execute_cmd_work [nvmet]
[   32.184919] [     T81] Call Trace:
[   32.184921] [     T81]  <TASK>
[   32.184923] [     T81]  dump_stack_lvl+0x6a/0x90
[   32.184939] [     T81]  ? nvmet_passthru_execute_cmd_work+0xe0a/0x1750 [nvmet]
[   32.184950] [     T81]  print_report+0x170/0x4f3
[   32.184954] [     T81]  ? __virt_addr_valid+0x22e/0x500
[   32.184958] [     T81]  ? nvmet_passthru_execute_cmd_work+0xe0a/0x1750 [nvmet]
[   32.184969] [     T81]  kasan_report+0xad/0x150
[   32.184974] [     T81]  ? nvmet_passthru_execute_cmd_work+0xe0a/0x1750 [nvmet]
[   32.184986] [     T81]  kasan_check_range+0x115/0x1f0
[   32.184989] [     T81]  __asan_memcpy+0x1f/0x60
[   32.184991] [     T81]  nvmet_passthru_execute_cmd_work+0xe0a/0x1750 [nvmet]
[   32.185003] [     T81]  ? lock_acquire+0x16a/0x2f0
[   32.185008] [     T81]  ? __pfx_nvmet_passthru_execute_cmd_work+0x10/0x10 [nvmet]
[   32.185019] [     T81]  ? lock_acquire+0x17a/0x2f0
[   32.185021] [     T81]  ? process_one_work+0x722/0x1490
[   32.185024] [     T81]  ? lock_release+0x1ab/0x2f0
[   32.185027] [     T81]  process_one_work+0x868/0x1490
[   32.185031] [     T81]  ? __pfx_process_one_work+0x10/0x10
[   32.185033] [     T81]  ? lock_acquire+0x16a/0x2f0
[   32.185037] [     T81]  ? assign_work+0x156/0x390
[   32.185040] [     T81]  worker_thread+0x5ee/0xfd0
[   32.185044] [     T81]  ? __pfx_worker_thread+0x10/0x10
[   32.185046] [     T81]  kthread+0x3af/0x770
[   32.185049] [     T81]  ? lock_acquire+0x17a/0x2f0
[   32.185051] [     T81]  ? __pfx_kthread+0x10/0x10
[   32.185053] [     T81]  ? __pfx_kthread+0x10/0x10
[   32.185055] [     T81]  ? ret_from_fork+0x6e/0x810
[   32.185058] [     T81]  ? lock_release+0x1ab/0x2f0
[   32.185060] [     T81]  ? rcu_is_watching+0x11/0xb0
[   32.185062] [     T81]  ? __pfx_kthread+0x10/0x10
[   32.185064] [     T81]  ret_from_fork+0x55c/0x810
[   32.185066] [     T81]  ? __pfx_ret_from_fork+0x10/0x10
[   32.185068] [     T81]  ? __switch_to+0x10a/0xda0
[   32.185072] [     T81]  ? __switch_to_asm+0x33/0x70
[   32.185074] [     T81]  ? __pfx_kthread+0x10/0x10
[   32.185077] [     T81]  ret_from_fork_asm+0x1a/0x30
[   32.185081] [     T81]  </TASK>

[   32.211854] [     T81] Allocated by task 1052:
[   32.212651] [     T81]  kasan_save_stack+0x2c/0x50
[   32.213499] [     T81]  kasan_save_track+0x10/0x30
[   32.214313] [     T81]  __kasan_kmalloc+0x96/0xb0
[   32.215104] [     T81]  __kmalloc_node_track_caller_noprof+0x2e7/0x8d0
[   32.216072] [     T81]  kstrndup+0x53/0xe0
[   32.216784] [     T81]  nvmet_subsys_alloc+0x243/0x680 [nvmet]
[   32.217710] [     T81]  nvmet_subsys_make+0x95/0x480 [nvmet]
[   32.218585] [     T81]  configfs_mkdir+0x457/0xb30
[   32.219365] [     T81]  vfs_mkdir+0x615/0x970
[   32.220093] [     T81]  do_mkdirat+0x3a1/0x500
[   32.220813] [     T81]  __x64_sys_mkdir+0xd3/0x160
[   32.221583] [     T81]  do_syscall_64+0x95/0x540
[   32.222333] [     T81]  entry_SYSCALL_64_after_hwframe+0x76/0x7e

[   32.223734] [     T81] The buggy address belongs to the object at ffff888146030fc0
                            which belongs to the cache kmalloc-32 of size 32
[   32.225728] [     T81] The buggy address is located 0 bytes inside of
                            allocated 21-byte region [ffff888146030fc0, ffff888146030fd5)

[   32.228244] [     T81] The buggy address belongs to the physical page:
[   32.229238] [     T81] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x146030
[   32.230466] [     T81] flags: 0x17ffffc0000000(node=0|zone=2|lastcpupid=0x1fffff)
[   32.231539] [     T81] page_type: f5(slab)
[   32.232239] [     T81] raw: 0017ffffc0000000 ffff888100042780 dead000000000122 0000000000000000
[   32.233443] [     T81] raw: 0000000000000000 0000000000400040 00000000f5000000 0000000000000000
[   32.234692] [     T81] page dumped because: kasan: bad access detected

[   32.236259] [     T81] Memory state around the buggy address:
[   32.237135] [     T81]  ffff888146030e80: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[   32.238303] [     T81]  ffff888146030f00: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[   32.239531] [     T81] >ffff888146030f80: fa fb fb fb fc fc fc fc 00 00 05 fc fc fc fc fc
[   32.240676] [     T81]                                                  ^
[   32.241679] [     T81]  ffff888146031000: fa fb fb fb fb fb fb fb fc fc fc fc fa fb fb fb
[   32.242843] [     T81]  ffff888146031080: fb fb fb fb fc fc fc fc fa fb fb fb fb fb fb fb


[5] WARN during nvme/058

Dec 21 12:33:40 testnode2 kernel: WARNING: block/blk-mq.c:321 at blk_mq_unquiesce_queue+0x9a/0xb0, CPU#1: kworker/u16:10/69854
Dec 21 12:33:40 testnode2 kernel: Modules linked in: nvme_fcloop nvmet_fc nvmet nvme_fc nvme_fabrics chacha chacha20poly1305 tls iw_cm ib_cm ib_core nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc 9pnet_virtio 9pnet netfs i2c_piix4 pcspkr i2c_smbus fuse loop dm_multipath nfnetlink vsock_loopback vmw_vsock_virtio_transport_common vsock zram xfs nvme bochs drm_client_lib drm_shmem_helper drm_kms_helper nvme_core drm nvme_keyring sym53c8xx nvme_auth floppy e1000 scsi_transport_spi hkdf serio_raw ata_generic pata_acpi i2c_dev qemu_fw_cfg [last unloaded: nvmet]
Dec 21 12:33:40 testnode2 kernel: CPU: 1 UID: 0 PID: 69854 Comm: kworker/u16:10 Not tainted 6.19.0-rc1+ #71 PREEMPT(voluntary)
Dec 21 12:33:40 testnode2 kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-8.fc42 06/10/2025
Dec 21 12:33:40 testnode2 kernel: Workqueue: nvme-reset-wq nvme_fc_reset_ctrl_work [nvme_fc]
Dec 21 12:33:40 testnode2 kernel: RIP: 0010:blk_mq_unquiesce_queue+0x9a/0xb0
Dec 21 12:33:40 testnode2 kernel: Code: 00 00 48 89 14 24 e8 d5 18 fc ff 48 8b 34 24 48 89 ef e8 b9 86 93 01 48 89 df be 01 00 00 00 48 83 c4 08 5b 5d e9 26 fa ff ff <0f> 0b eb bb 48 89 14 24 e8 89 34 79 ff 48 8b 14 24 eb 97 0f 1f 00
Dec 21 12:33:40 testnode2 kernel: RSP: 0018:ffff888136c67ab0 EFLAGS: 00010046
Dec 21 12:33:40 testnode2 kernel: RAX: 0000000000000000 RBX: ffff88812cd27260 RCX: 1ffff110259a4e6d
Dec 21 12:33:40 testnode2 kernel: RDX: 0000000000000282 RSI: 0000000000000004 RDI: ffff88812cd27368
Dec 21 12:33:40 testnode2 kernel: RBP: ffff88812cd27328 R08: ffffffff886d1354 R09: ffffed1026d8cf45
Dec 21 12:33:40 testnode2 kernel: R10: 0000000000000003 R11: 1ffff1103d07568e R12: ffff888107c78380
Dec 21 12:33:40 testnode2 kernel: R13: ffff88813be31840 R14: ffff888107c78040 R15: ffff88813be31818
Dec 21 12:33:40 testnode2 kernel: FS:  0000000000000000(0000) GS:ffff888420395000(0000) knlGS:0000000000000000
Dec 21 12:33:40 testnode2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Dec 21 12:33:40 testnode2 kernel: CR2: 00007f8d6a8af000 CR3: 00000001b6b3e000 CR4: 00000000000006f0
Dec 21 12:33:40 testnode2 kernel: Call Trace:
Dec 21 12:33:40 testnode2 kernel:  <TASK>
Dec 21 12:33:40 testnode2 kernel:  ? _raw_spin_unlock_irqrestore+0x35/0x60
Dec 21 12:33:40 testnode2 kernel:  blk_mq_unquiesce_tagset+0xdd/0x1c0
Dec 21 12:33:40 testnode2 kernel:  nvme_fc_delete_association+0x4ee/0x700 [nvme_fc]
Dec 21 12:33:40 testnode2 kernel:  ? __pfx_nvme_fc_delete_association+0x10/0x10 [nvme_fc]
Dec 21 12:33:40 testnode2 kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
Dec 21 12:33:40 testnode2 kernel:  ? rcu_is_watching+0x11/0xb0
Dec 21 12:33:40 testnode2 kernel:  nvme_fc_reset_ctrl_work+0x27/0x110 [nvme_fc]
Dec 21 12:33:40 testnode2 kernel:  process_one_work+0x868/0x1490
Dec 21 12:33:40 testnode2 kernel:  ? __pfx_process_one_work+0x10/0x10
Dec 21 12:33:40 testnode2 kernel:  ? assign_work+0x156/0x390
Dec 21 12:33:40 testnode2 kernel:  worker_thread+0x5ee/0xfd0
Dec 21 12:33:40 testnode2 kernel:  ? __pfx_worker_thread+0x10/0x10
Dec 21 12:33:40 testnode2 kernel:  kthread+0x3af/0x770
Dec 21 12:33:40 testnode2 kernel:  ? lock_acquire+0x2a9/0x2f0
Dec 21 12:33:40 testnode2 kernel:  ? __pfx_kthread+0x10/0x10
Dec 21 12:33:40 testnode2 kernel:  ? finish_task_switch.isra.0+0x196/0x790
Dec 21 12:33:40 testnode2 kernel:  ? rcu_is_watching+0x11/0xb0
Dec 21 12:33:40 testnode2 kernel:  ? lock_release+0x242/0x2f0
Dec 21 12:33:40 testnode2 kernel:  ? rcu_is_watching+0x11/0xb0
Dec 21 12:33:40 testnode2 kernel:  ? __pfx_kthread+0x10/0x10
Dec 21 12:33:40 testnode2 kernel:  ret_from_fork+0x55c/0x810
Dec 21 12:33:40 testnode2 kernel:  ? __pfx_ret_from_fork+0x10/0x10
Dec 21 12:33:40 testnode2 kernel:  ? __switch_to+0x10a/0xda0
Dec 21 12:33:40 testnode2 kernel:  ? __switch_to_asm+0x33/0x70
Dec 21 12:33:40 testnode2 kernel:  ? __pfx_kthread+0x10/0x10
Dec 21 12:33:40 testnode2 kernel:  ret_from_fork_asm+0x1a/0x30
Dec 21 12:33:40 testnode2 kernel:  </TASK>
Dec 21 12:33:40 testnode2 kernel: irq event stamp: 0
Dec 21 12:33:40 testnode2 kernel: hardirqs last  enabled at (0): [<0000000000000000>] 0x0
Dec 21 12:33:40 testnode2 kernel: hardirqs last disabled at (0): [<ffffffff884f77b5>] copy_process+0x1dc5/0x6d40
Dec 21 12:33:40 testnode2 kernel: softirqs last  enabled at (0): [<ffffffff884f780d>] copy_process+0x1e1d/0x6d40
Dec 21 12:33:40 testnode2 kernel: softirqs last disabled at (0): [<0000000000000000>] 0x0

