blktests failures with v7.0-rc1 kernel
John Garry
john.g.garry at oracle.com
Thu Feb 26 01:18:35 PST 2026
On 26/02/2026 08:09, Shinichiro Kawasaki wrote:
> Hi all,
>
> I ran the latest blktests (git hash: f14914d04256) with the v7.0-rc1 kernel. I
> observed 8 failures listed below. Compared with the previous report for the
> v6.19 kernel [1], 1 failure was resolved (zbd/013) and 3 failures are newly
> observed (blktrace/002, nvme/060 and nvme/061 fc transport). Fixes for these
> failures are welcome as always. In particular, nvme/058 and nvme/061 cause
> test-run hangs on the fc transport, so fixes for those hangs would be highly
> appreciated.
JFYI, I saw this splat for nvme/033 on the nvme-7.0 branch *:
[ 15.525025] systemd-journald[347]: /var/log/journal/89df182291654cc0b051327dd5a58135/user-1000.journal: Journal file uses a different sequence number ID, rotating.
[ 21.339287] run blktests nvme/033 at 2026-02-26 08:45:20
[ 21.522168] nvmet: Created nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
[ 21.527332] ==================================================================
[ 21.527408] BUG: KASAN: slab-out-of-bounds in nvmet_passthru_execute_cmd_work+0xf94/0x1a80 [nvmet]
[ 21.527494] Read of size 256 at addr ffff888100be2bc0 by task kworker/u17:2/50
[ 21.527580] CPU: 0 UID: 0 PID: 50 Comm: kworker/u17:2 Not tainted 6.19.0-rc3-00080-g6c7172c14e92 #37 PREEMPT(voluntary)
[ 21.527589] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 21.527594] Workqueue: nvmet-wq nvmet_passthru_execute_cmd_work [nvmet]
[ 21.527636] Call Trace:
[ 21.527639] <TASK>
[ 21.527643] dump_stack_lvl+0x91/0xf0
[ 21.527695] print_report+0xd1/0x660
[ 21.527710] ? __virt_addr_valid+0x23a/0x440
[ 21.527721] ? kasan_complete_mode_report_info+0x26/0x200
[ 21.527733] kasan_report+0xf3/0x130
[ 21.527739] ? nvmet_passthru_execute_cmd_work+0xf94/0x1a80 [nvmet]
[ 21.527776] ? nvmet_passthru_execute_cmd_work+0xf94/0x1a80 [nvmet]
[ 21.527816] kasan_check_range+0x11c/0x200
[ 21.527824] __asan_memcpy+0x23/0x80
[ 21.527834] nvmet_passthru_execute_cmd_work+0xf94/0x1a80 [nvmet]
[ 21.527875] ? __pfx_nvmet_passthru_execute_cmd_work+0x10/0x10 [nvmet]
[ 21.527910] ? _raw_spin_unlock_irq+0x27/0x70
[ 21.527920] ? _raw_spin_unlock_irq+0x27/0x70
[ 21.527930] process_one_work+0x84b/0x19a0
[ 21.527946] ? __pfx_process_one_work+0x10/0x10
[ 21.527951] ? do_raw_spin_lock+0x136/0x290
[ 21.527964] ? assign_work+0x16f/0x280
[ 21.527970] ? lock_is_held_type+0xa3/0x130
[ 21.527982] worker_thread+0x6f0/0x11f0
[ 21.527998] ? __pfx_worker_thread+0x10/0x10
[ 21.528004] kthread+0x3dd/0x8a0
[ 21.528010] ? _raw_spin_unlock_irq+0x27/0x70
[ 21.528026] ? __pfx_kthread+0x10/0x10
[ 21.528031] ? trace_hardirqs_on+0x22/0x130
[ 21.528043] ? _raw_spin_unlock_irq+0x27/0x70
[ 21.528052] ? __pfx_kthread+0x10/0x10
[ 21.528059] ret_from_fork+0x65e/0x810
[ 21.528069] ? __pfx_ret_from_fork+0x10/0x10
[ 21.528077] ? __switch_to+0x385/0xdf0
[ 21.528087] ? __pfx_kthread+0x10/0x10
[ 21.528094] ret_from_fork_asm+0x1a/0x30
[ 21.528113] </TASK>
[ 21.546173] Allocated by task 1328:
[ 21.546459] kasan_save_stack+0x39/0x70
[ 21.546467] kasan_save_track+0x14/0x40
[ 21.546469] kasan_save_alloc_info+0x37/0x60
[ 21.546472] __kasan_kmalloc+0xc3/0xd0
[ 21.546474] __kmalloc_node_track_caller_noprof+0x2b6/0x950
[ 21.546478] kstrndup+0x5c/0x120
[ 21.546482] nvmet_subsys_alloc+0x361/0x750 [nvmet]
[ 21.546501] nvmet_subsys_make+0x9c/0x420 [nvmet]
[ 21.546513] configfs_mkdir+0x484/0xe60
[ 21.546518] vfs_mkdir+0x631/0x990
[ 21.546522] do_mkdirat+0x3e6/0x550
[ 21.546524] __x64_sys_mkdir+0xd7/0x130
[ 21.546527] x64_sys_call+0x1fb1/0x26b0
[ 21.546532] do_syscall_64+0x91/0x520
[ 21.546536] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 21.546800] The buggy address belongs to the object at ffff888100be2bc0 which belongs to the cache kmalloc-rnd-02-32 of size 32
[ 21.547326] The buggy address is located 0 bytes inside of allocated 21-byte region [ffff888100be2bc0, ffff888100be2bd5)
[ 21.548115] The buggy address belongs to the physical page:
[ 21.548374] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffff888100be2780 pfn:0x100be2
[ 21.548377] anon flags: 0x17ffffc0000000(node=0|zone=2|lastcpupid=0x1fffff)
[ 21.548380] page_type: f5(slab)
[ 21.548385] raw: 0017ffffc0000000 ffff8881000488c0 0000000000000000 0000000000000001
[ 21.548387] raw: ffff888100be2780 000000008040003e 00000000f5000000 0000000000000000
[ 21.548389] page dumped because: kasan: bad access detected
[ 21.548645] Memory state around the buggy address:
[ 21.548895] ffff888100be2a80: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[ 21.549150] ffff888100be2b00: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[ 21.549393] >ffff888100be2b80: fa fb fb fb fc fc fc fc 00 00 05 fc fc fc fc fc
[ 21.549635]                                            ^
[ 21.549867] ffff888100be2c00: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[ 21.550103] ffff888100be2c80: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[ 21.550339] ==================================================================
[ 21.550647] Disabling lock debugging due to kernel taint
[ 21.552579] nvme nvme1: creating 4 I/O queues.
[ 21.554074] nvme nvme1: new ctrl: "blktests-subsystem-1"
[ 21.556678] nvme nvme1: Duplicate unshared namespace 1
[ 24.849982] nvme nvme1: Removing ctrl: NQN "nqn.2019-08.org.qemu:0eadbee1"
[ 49.368933] SGI XFS with ACLs, security attributes, realtime, quota, no debug enabled
[ 49.377335] XFS (nvme0n1): Mounting V5 Filesystem 48f88624-c9cd-487e-87a2-de1c0c1cb9b6
[ 49.392190] XFS (nvme0n1): Ending clean mount
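The report decodes fairly cleanly: the allocation stack shows the source buffer is the subsystem NQN string, kstrndup()'d in nvmet_subsys_alloc() into a 21-byte region (the shadow row "00 00 05" matches: two full 8-byte granules plus 5 valid bytes), while the faulting memcpy() in nvmet_passthru_execute_cmd_work() reads a fixed 256 bytes, i.e. NVMF_NQN_FIELD_LEN. A minimal sketch of the suspected pattern and an obvious alternative, assuming the copy fills id->subnqn (names taken from the trace; I have not checked the exact code on this branch):

	/* Hypothetical reconstruction, not the verified nvme-7.0 code:
	 * subsysnqn is kstrndup()'d (only strlen(nqn) + 1 bytes, 21 here),
	 * but the copy always reads the full 256-byte NVMF_NQN_FIELD_LEN,
	 * walking off the end of the kmalloc-32 object.
	 */
	memcpy(id->subnqn, ctrl->subsys->subsysnqn, NVMF_NQN_FIELD_LEN);

	/* Safer: zero-fill the fixed-size field, then copy only the string. */
	memset(id->subnqn, 0, sizeof(id->subnqn));
	strscpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn));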
* this branch did not seem to make it into v7.0, and I did not test v7.0 itself.