blktests failures with v6.14-rc1 kernel
Shinichiro Kawasaki
shinichiro.kawasaki at wdc.com
Thu Feb 6 17:24:09 PST 2025
Hi all,
I ran the latest blktests (git hash: 67aff550bd52) with the v6.14-rc1 kernel.
I observed the five failures listed below. Compared with the previous report for
the v6.13 kernel [1], one new failure was observed, at zbd/009.
[1] https://lore.kernel.org/linux-nvme/rv3w2zcno7n3bgdy2ghxmedsqf23ptmakvjerbhopgxjsvgzmo@ioece7dyg2og/
List of failures
================
#1: block/002
#2: nvme/037 (fc transport)
#3: nvme/041 (fc transport)
#4: nvme/058 (loop transport)
#5: zbd/009 (new)
Two failures observed with the v6.13 kernel are no longer observed with the
v6.14-rc1 kernel.
Failures no longer observed
===========================
#1: block/001
   It looks resolved by fixes in the v6.14-rc1 kernel.
#2: throtl/001 (CKI project, s390 arch)
   I was not able to find blktests runs by the CKI project with the v6.14-rc1
   kernel.
Failure description
===================
#1: block/002
   This test case fails with a lockdep WARN "possible circular locking
   dependency detected". The lockdep splat shows q->q_usage_counter as one
   of the involved locks. It was first observed with the v6.13-rc2 kernel [2]
   and is still observed with the v6.14-rc1 kernel. It needs further debugging.
[2] https://lore.kernel.org/linux-block/qskveo3it6rqag4xyleobe5azpxu6tekihao4qpdopvk44una2@y4lkoe6y3d6z/
#2: nvme/037 (fc transport)
#3: nvme/041 (fc transport)
   These two test cases fail with the fc transport. Refer to the report for the
   v6.12 kernel [3].
[3] https://lore.kernel.org/linux-nvme/6crydkodszx5vq4ieox3jjpwkxtu7mhbohypy24awlo5w7f4k6@to3dcng24rd4/
#4: nvme/058 (loop transport)
   This test case occasionally hangs with an Oops and a KASAN null-ptr-deref. It
   was first reported with the v6.13 kernel [1]. A candidate fix patch was
   posted [4] (thanks!). The patch needs further work.
[4] https://lore.kernel.org/linux-nvme/20250124082505.140258-1-hare@kernel.org/
#5: zbd/009 (new)
   This test case fails with a lockdep WARN "possible circular locking
   dependency detected" [5]. The lockdep splat shows q->q_usage_counter as one
   of the involved locks, which it has in common with the block/002 failure. It
   needs further debugging.
[5] kernel message during zbd/009 run
[ 204.099296] [ T1004] run blktests zbd/009 at 2025-02-07 10:01:36
[ 204.155021] [ T1040] sd 9:0:0:0: [sdd] Synchronizing SCSI cache
[ 204.553613] [ T1041] scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1)
[ 204.554438] [ T1041] scsi host9: scsi_debug: version 0191 [20210520]
dev_size_mb=1024, opts=0x0, submit_queues=1, statistics=0
[ 204.558331] [ T1041] scsi 9:0:0:0: Direct-Access-ZBC Linux scsi_debug 0191 PQ: 0 ANSI: 7
[ 204.560269] [ C2] scsi 9:0:0:0: Power-on or device reset occurred
[ 204.562871] [ T1041] sd 9:0:0:0: Attached scsi generic sg3 type 20
[ 204.563013] [ T100] sd 9:0:0:0: [sdd] Host-managed zoned block device
[ 204.564518] [ T100] sd 9:0:0:0: [sdd] 262144 4096-byte logical blocks: (1.07 GB/1.00 GiB)
[ 204.565477] [ T100] sd 9:0:0:0: [sdd] Write Protect is off
[ 204.565948] [ T100] sd 9:0:0:0: [sdd] Mode Sense: 5b 00 10 08
[ 204.566245] [ T100] sd 9:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 204.567453] [ T100] sd 9:0:0:0: [sdd] permanent stream count = 5
[ 204.568276] [ T100] sd 9:0:0:0: [sdd] Preferred minimum I/O size 4096 bytes
[ 204.569067] [ T100] sd 9:0:0:0: [sdd] Optimal transfer size 4194304 bytes
[ 204.571080] [ T100] sd 9:0:0:0: [sdd] 256 zones of 1024 logical blocks
[ 204.593822] [ T100] sd 9:0:0:0: [sdd] Attached SCSI disk
[ 204.901514] [ T1067] BTRFS: device fsid 15196e63-e303-48ed-9dcb-9ec397479c42 devid 1 transid 8 /dev/sdd (8:48) scanned by mount (1067)
[ 204.910330] [ T1067] BTRFS info (device sdd): first mount of filesystem 15196e63-e303-48ed-9dcb-9ec397479c42
[ 204.913129] [ T1067] BTRFS info (device sdd): using crc32c (crc32c-generic) checksum algorithm
[ 204.914856] [ T1067] BTRFS info (device sdd): using free-space-tree
[ 204.925816] [ T1067] BTRFS info (device sdd): host-managed zoned block device /dev/sdd, 256 zones of 4194304 bytes
[ 204.929320] [ T1067] BTRFS info (device sdd): zoned mode enabled with zone size 4194304
[ 204.935403] [ T1067] BTRFS info (device sdd): checking UUID tree
[ 215.637712] [ T1103] BTRFS info (device sdd): last unmount of filesystem 15196e63-e303-48ed-9dcb-9ec397479c42
[ 215.762293] [ T1110] ======================================================
[ 215.763636] [ T1110] WARNING: possible circular locking dependency detected
[ 215.765092] [ T1110] 6.14.0-rc1 #252 Not tainted
[ 215.766271] [ T1110] ------------------------------------------------------
[ 215.767615] [ T1110] modprobe/1110 is trying to acquire lock:
[ 215.768999] [ T1110] ffff888100ac83e0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: __flush_work+0x38f/0xb60
[ 215.770700] [ T1110]
but task is already holding lock:
[ 215.773077] [ T1110] ffff8881205b6f20 (&q->q_usage_counter(queue)#16){++++}-{0:0}, at: sd_remove+0x85/0x130
[ 215.774685] [ T1110]
which lock already depends on the new lock.
[ 215.778184] [ T1110]
the existing dependency chain (in reverse order) is:
[ 215.780532] [ T1110]
-> #3 (&q->q_usage_counter(queue)#16){++++}-{0:0}:
[ 215.782937] [ T1110] blk_queue_enter+0x3d9/0x500
[ 215.784175] [ T1110] blk_mq_alloc_request+0x47d/0x8e0
[ 215.785434] [ T1110] scsi_execute_cmd+0x14f/0xb80
[ 215.786662] [ T1110] sd_zbc_do_report_zones+0x1c1/0x470
[ 215.787989] [ T1110] sd_zbc_report_zones+0x362/0xd60
[ 215.789222] [ T1110] blkdev_report_zones+0x1b1/0x2e0
[ 215.790448] [ T1110] btrfs_get_dev_zones+0x215/0x7e0 [btrfs]
[ 215.791887] [ T1110] btrfs_load_block_group_zone_info+0x6d2/0x2c10 [btrfs]
[ 215.793342] [ T1110] btrfs_make_block_group+0x36b/0x870 [btrfs]
[ 215.794752] [ T1110] btrfs_create_chunk+0x147d/0x2320 [btrfs]
[ 215.796150] [ T1110] btrfs_chunk_alloc+0x2ce/0xcf0 [btrfs]
[ 215.797474] [ T1110] start_transaction+0xce6/0x1620 [btrfs]
[ 215.798858] [ T1110] btrfs_uuid_scan_kthread+0x4ee/0x5b0 [btrfs]
[ 215.800334] [ T1110] kthread+0x39d/0x750
[ 215.801479] [ T1110] ret_from_fork+0x30/0x70
[ 215.802662] [ T1110] ret_from_fork_asm+0x1a/0x30
[ 215.803902] [ T1110]
-> #2 (&fs_info->dev_replace.rwsem){++++}-{4:4}:
[ 215.805993] [ T1110] down_read+0x9b/0x470
[ 215.807088] [ T1110] btrfs_map_block+0x2ce/0x2ce0 [btrfs]
[ 215.808366] [ T1110] btrfs_submit_chunk+0x2d4/0x16c0 [btrfs]
[ 215.809687] [ T1110] btrfs_submit_bbio+0x16/0x30 [btrfs]
[ 215.810983] [ T1110] btree_write_cache_pages+0xb5a/0xf90 [btrfs]
[ 215.812295] [ T1110] do_writepages+0x17f/0x7b0
[ 215.813416] [ T1110] __writeback_single_inode+0x114/0xb00
[ 215.814575] [ T1110] writeback_sb_inodes+0x52b/0xe00
[ 215.815717] [ T1110] wb_writeback+0x1a7/0x800
[ 215.816924] [ T1110] wb_workfn+0x12a/0xbd0
[ 215.817951] [ T1110] process_one_work+0x85a/0x1460
[ 215.818985] [ T1110] worker_thread+0x5e2/0xfc0
[ 215.820013] [ T1110] kthread+0x39d/0x750
[ 215.821000] [ T1110] ret_from_fork+0x30/0x70
[ 215.822010] [ T1110] ret_from_fork_asm+0x1a/0x30
[ 215.822988] [ T1110]
-> #1 (&fs_info->zoned_meta_io_lock){+.+.}-{4:4}:
[ 215.824855] [ T1110] __mutex_lock+0x1aa/0x1360
[ 215.825856] [ T1110] btree_write_cache_pages+0x252/0xf90 [btrfs]
[ 215.827089] [ T1110] do_writepages+0x17f/0x7b0
[ 215.828027] [ T1110] __writeback_single_inode+0x114/0xb00
[ 215.829141] [ T1110] writeback_sb_inodes+0x52b/0xe00
[ 215.830129] [ T1110] wb_writeback+0x1a7/0x800
[ 215.831084] [ T1110] wb_workfn+0x12a/0xbd0
[ 215.831950] [ T1110] process_one_work+0x85a/0x1460
[ 215.832862] [ T1110] worker_thread+0x5e2/0xfc0
[ 215.833826] [ T1110] kthread+0x39d/0x750
[ 215.834715] [ T1110] ret_from_fork+0x30/0x70
[ 215.835669] [ T1110] ret_from_fork_asm+0x1a/0x30
[ 215.836594] [ T1110]
-> #0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}:
[ 215.838347] [ T1110] __lock_acquire+0x2f52/0x5ea0
[ 215.839258] [ T1110] lock_acquire+0x1b1/0x540
[ 215.840156] [ T1110] __flush_work+0x3ac/0xb60
[ 215.841041] [ T1110] wb_shutdown+0x15b/0x1f0
[ 215.841915] [ T1110] bdi_unregister+0x172/0x5b0
[ 215.842793] [ T1110] del_gendisk+0x841/0xa20
[ 215.843724] [ T1110] sd_remove+0x85/0x130
[ 215.844660] [ T1110] device_release_driver_internal+0x368/0x520
[ 215.845757] [ T1110] bus_remove_device+0x1f1/0x3f0
[ 215.846755] [ T1110] device_del+0x3bd/0x9c0
[ 215.847712] [ T1110] __scsi_remove_device+0x272/0x340
[ 215.848727] [ T1110] scsi_forget_host+0xf7/0x170
[ 215.849710] [ T1110] scsi_remove_host+0xd2/0x2a0
[ 215.850682] [ T1110] sdebug_driver_remove+0x52/0x2f0 [scsi_debug]
[ 215.851788] [ T1110] device_release_driver_internal+0x368/0x520
[ 215.852853] [ T1110] bus_remove_device+0x1f1/0x3f0
[ 215.853885] [ T1110] device_del+0x3bd/0x9c0
[ 215.854840] [ T1110] device_unregister+0x13/0xa0
[ 215.855850] [ T1110] sdebug_do_remove_host+0x1fb/0x290 [scsi_debug]
[ 215.856947] [ T1110] scsi_debug_exit+0x17/0x70 [scsi_debug]
[ 215.857968] [ T1110] __do_sys_delete_module.isra.0+0x321/0x520
[ 215.858999] [ T1110] do_syscall_64+0x93/0x180
[ 215.859930] [ T1110] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 215.860974] [ T1110]
other info that might help us debug this:
[ 215.863317] [ T1110] Chain exists of:
(work_completion)(&(&wb->dwork)->work) --> &fs_info->dev_replace.rwsem --> &q->q_usage_counter(queue)#16
[ 215.866277] [ T1110] Possible unsafe locking scenario:
[ 215.867927] [ T1110] CPU0 CPU1
[ 215.868904] [ T1110] ---- ----
[ 215.869880] [ T1110] lock(&q->q_usage_counter(queue)#16);
[ 215.870878] [ T1110] lock(&fs_info->dev_replace.rwsem);
[ 215.872075] [ T1110] lock(&q->q_usage_counter(queue)#16);
[ 215.873274] [ T1110] lock((work_completion)(&(&wb->dwork)->work));
[ 215.874332] [ T1110]
*** DEADLOCK ***
[ 215.876625] [ T1110] 5 locks held by modprobe/1110:
[ 215.877579] [ T1110] #0: ffff88811f7bc108 (&dev->mutex){....}-{4:4}, at: device_release_driver_internal+0x8f/0x520
[ 215.879029] [ T1110] #1: ffff8881022ee0e0 (&shost->scan_mutex){+.+.}-{4:4}, at: scsi_remove_host+0x20/0x2a0
[ 215.880402] [ T1110] #2: ffff88811b4c4378 (&dev->mutex){....}-{4:4}, at: device_release_driver_internal+0x8f/0x520
[ 215.881861] [ T1110] #3: ffff8881205b6f20 (&q->q_usage_counter(queue)#16){++++}-{0:0}, at: sd_remove+0x85/0x130
[ 215.883302] [ T1110] #4: ffffffffa3284360 (rcu_read_lock){....}-{1:3}, at: __flush_work+0xda/0xb60
[ 215.884667] [ T1110]
stack backtrace:
[ 215.886418] [ T1110] CPU: 0 UID: 0 PID: 1110 Comm: modprobe Not tainted 6.14.0-rc1 #252
[ 215.886422] [ T1110] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014
[ 215.886425] [ T1110] Call Trace:
[ 215.886430] [ T1110] <TASK>
[ 215.886432] [ T1110] dump_stack_lvl+0x6a/0x90
[ 215.886440] [ T1110] print_circular_bug.cold+0x1e0/0x274
[ 215.886445] [ T1110] check_noncircular+0x306/0x3f0
[ 215.886449] [ T1110] ? __pfx_check_noncircular+0x10/0x10
[ 215.886452] [ T1110] ? mark_lock+0xf5/0x1650
[ 215.886454] [ T1110] ? __pfx_check_irq_usage+0x10/0x10
[ 215.886458] [ T1110] ? lockdep_lock+0xca/0x1c0
[ 215.886460] [ T1110] ? __pfx_lockdep_lock+0x10/0x10
[ 215.886464] [ T1110] __lock_acquire+0x2f52/0x5ea0
[ 215.886469] [ T1110] ? __pfx___lock_acquire+0x10/0x10
[ 215.886473] [ T1110] ? __pfx_mark_lock+0x10/0x10
[ 215.886476] [ T1110] lock_acquire+0x1b1/0x540
[ 215.886479] [ T1110] ? __flush_work+0x38f/0xb60
[ 215.886482] [ T1110] ? __pfx_lock_acquire+0x10/0x10
[ 215.886485] [ T1110] ? __pfx_lock_release+0x10/0x10
[ 215.886488] [ T1110] ? mark_held_locks+0x94/0xe0
[ 215.886492] [ T1110] ? __flush_work+0x38f/0xb60
[ 215.886494] [ T1110] __flush_work+0x3ac/0xb60
[ 215.886498] [ T1110] ? __flush_work+0x38f/0xb60
[ 215.886501] [ T1110] ? __pfx_mark_lock+0x10/0x10
[ 215.886503] [ T1110] ? __pfx___flush_work+0x10/0x10
[ 215.886506] [ T1110] ? __pfx_wq_barrier_func+0x10/0x10
[ 215.886515] [ T1110] ? __pfx___might_resched+0x10/0x10
[ 215.886520] [ T1110] ? mark_held_locks+0x94/0xe0
[ 215.886524] [ T1110] wb_shutdown+0x15b/0x1f0
[ 215.886527] [ T1110] bdi_unregister+0x172/0x5b0
[ 215.886530] [ T1110] ? __pfx_bdi_unregister+0x10/0x10
[ 215.886535] [ T1110] ? up_write+0x1ba/0x510
[ 215.886539] [ T1110] del_gendisk+0x841/0xa20
[ 215.886543] [ T1110] ? __pfx_del_gendisk+0x10/0x10
[ 215.886546] [ T1110] ? _raw_spin_unlock_irqrestore+0x35/0x60
[ 215.886550] [ T1110] ? __pm_runtime_resume+0x79/0x110
[ 215.886556] [ T1110] sd_remove+0x85/0x130
[ 215.886558] [ T1110] device_release_driver_internal+0x368/0x520
[ 215.886563] [ T1110] ? kobject_put+0x5d/0x4a0
[ 215.886567] [ T1110] bus_remove_device+0x1f1/0x3f0
[ 215.886570] [ T1110] device_del+0x3bd/0x9c0
[ 215.886574] [ T1110] ? __pfx_device_del+0x10/0x10
[ 215.886578] [ T1110] __scsi_remove_device+0x272/0x340
[ 215.886581] [ T1110] scsi_forget_host+0xf7/0x170
[ 215.886585] [ T1110] scsi_remove_host+0xd2/0x2a0
[ 215.886587] [ T1110] sdebug_driver_remove+0x52/0x2f0 [scsi_debug]
[ 215.886600] [ T1110] ? kernfs_remove_by_name_ns+0xc0/0xf0
[ 215.886607] [ T1110] device_release_driver_internal+0x368/0x520
[ 215.886610] [ T1110] ? kobject_put+0x5d/0x4a0
[ 215.886613] [ T1110] bus_remove_device+0x1f1/0x3f0
[ 215.886616] [ T1110] device_del+0x3bd/0x9c0
[ 215.886619] [ T1110] ? __pfx_device_del+0x10/0x10
[ 215.886621] [ T1110] ? __pfx___mutex_unlock_slowpath+0x10/0x10
[ 215.886626] [ T1110] device_unregister+0x13/0xa0
[ 215.886628] [ T1110] sdebug_do_remove_host+0x1fb/0x290 [scsi_debug]
[ 215.886640] [ T1110] scsi_debug_exit+0x17/0x70 [scsi_debug]
[ 215.886652] [ T1110] __do_sys_delete_module.isra.0+0x321/0x520
[ 215.886655] [ T1110] ? __pfx___do_sys_delete_module.isra.0+0x10/0x10
[ 215.886657] [ T1110] ? __pfx_slab_free_after_rcu_debug+0x10/0x10
[ 215.886665] [ T1110] ? kasan_save_stack+0x2c/0x50
[ 215.886670] [ T1110] ? kasan_record_aux_stack+0xa3/0xb0
[ 215.886673] [ T1110] ? __call_rcu_common.constprop.0+0xc4/0xfb0
[ 215.886677] [ T1110] ? kmem_cache_free+0x3a0/0x590
[ 215.886679] [ T1110] ? __x64_sys_close+0x78/0xd0
[ 215.886687] [ T1110] do_syscall_64+0x93/0x180
[ 215.886694] [ T1110] ? lock_is_held_type+0xd5/0x130
[ 215.886697] [ T1110] ? __call_rcu_common.constprop.0+0x3c0/0xfb0
[ 215.886699] [ T1110] ? lockdep_hardirqs_on+0x78/0x100
[ 215.886701] [ T1110] ? __call_rcu_common.constprop.0+0x3c0/0xfb0
[ 215.886705] [ T1110] ? __pfx___call_rcu_common.constprop.0+0x10/0x10
[ 215.886710] [ T1110] ? kmem_cache_free+0x3a0/0x590
[ 215.886713] [ T1110] ? lockdep_hardirqs_on_prepare+0x16d/0x400
[ 215.886715] [ T1110] ? do_syscall_64+0x9f/0x180
[ 215.886717] [ T1110] ? lockdep_hardirqs_on+0x78/0x100
[ 215.886719] [ T1110] ? do_syscall_64+0x9f/0x180
[ 215.886721] [ T1110] ? __pfx___x64_sys_openat+0x10/0x10
[ 215.886725] [ T1110] ? lockdep_hardirqs_on_prepare+0x16d/0x400
[ 215.886727] [ T1110] ? do_syscall_64+0x9f/0x180
[ 215.886729] [ T1110] ? lockdep_hardirqs_on+0x78/0x100
[ 215.886731] [ T1110] ? do_syscall_64+0x9f/0x180
[ 215.886734] [ T1110] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 215.886737] [ T1110] RIP: 0033:0x7f436712b68b
[ 215.886741] [ T1110] Code: 73 01 c3 48 8b 0d 8d a7 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 5d a7 0c 00 f7 d8 64 89 01 48
[ 215.886743] [ T1110] RSP: 002b:00007ffe9f1a8658 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0
[ 215.886750] [ T1110] RAX: ffffffffffffffda RBX: 00005559b367fd80 RCX: 00007f436712b68b
[ 215.886753] [ T1110] RDX: 0000000000000000 RSI: 0000000000000800 RDI: 00005559b367fde8
[ 215.886754] [ T1110] RBP: 00007ffe9f1a8680 R08: 1999999999999999 R09: 0000000000000000
[ 215.886756] [ T1110] R10: 00007f43671a5fe0 R11: 0000000000000206 R12: 0000000000000000
[ 215.886757] [ T1110] R13: 00007ffe9f1a86b0 R14: 0000000000000000 R15: 0000000000000000
[ 215.886761] [ T1110] </TASK>
[ 215.989918] [ T1110] sd 9:0:0:0: [sdd] Synchronizing SCSI cache
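As an aside, splats like the one above are long, and the circular dependency is
easiest to see from just the "-> #N" lock-class lines. A small sketch (not part
of the report; the file path is made up, and the sample is abridged from the
zbd/009 splat with timestamps and task prefixes stripped) that pulls the
dependency chain out of a captured log:

```shell
# Write an abridged copy of the zbd/009 lockdep chain to a scratch file.
cat > /tmp/zbd009-splat.txt <<'EOF'
-> #3 (&q->q_usage_counter(queue)#16){++++}-{0:0}:
-> #2 (&fs_info->dev_replace.rwsem){++++}-{4:4}:
-> #1 (&fs_info->zoned_meta_io_lock){+.+.}-{4:4}:
-> #0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}:
EOF
# List the lock classes in the existing dependency chain, #3 first, #0 last;
# the cycle closes when a holder of #3 tries to acquire #0.
grep -E '^-> #[0-9]' /tmp/zbd009-splat.txt
```

With a full dmesg capture, the same grep (plus the "trying to acquire" and
"already holding" lines) is usually enough to compare two splats and tell
whether they share a root cause, as block/002 and zbd/009 appear to here.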