blktests failures with v6.14-rc1 kernel
Shinichiro Kawasaki
shinichiro.kawasaki at wdc.com
Fri Feb 7 04:22:32 PST 2025
On Feb 07, 2025 / 16:52, Ming Lei wrote:
> Hi Shinichiro,
Hi Ming, thanks for the comments. Let me comment on the block/002 failure.
> > Failure description
> > ===================
> >
> > #1: block/002
> >
> > This test case fails with a lockdep WARN "possible circular locking
> > dependency detected". The lockdep splats shows q->q_usage_counter as one
> > of the involved locks. It was observed with the v6.13-rc2 kernel [2], and
> > still observed with v6.14-rc1 kernel. It needs further debugging.
> >
> > [2] https://lore.kernel.org/linux-block/qskveo3it6rqag4xyleobe5azpxu6tekihao4qpdopvk44una2@y4lkoe6y3d6z/
>
> [ 342.568086][ T1023] -> #0 (&mm->mmap_lock){++++}-{4:4}:
> [ 342.569658][ T1023] __lock_acquire+0x2e8b/0x6010
> [ 342.570577][ T1023] lock_acquire+0x1b1/0x540
> [ 342.571463][ T1023] __might_fault+0xb9/0x120
> [ 342.572338][ T1023] _copy_from_user+0x34/0xa0
> [ 342.573231][ T1023] __blk_trace_setup+0xa0/0x140
> [ 342.574129][ T1023] blk_trace_ioctl+0x14e/0x270
> [ 342.575033][ T1023] blkdev_ioctl+0x38f/0x5c0
> [ 342.575919][ T1023] __x64_sys_ioctl+0x130/0x190
> [ 342.576824][ T1023] do_syscall_64+0x93/0x180
> [ 342.577714][ T1023] entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> The above dependency between ->mmap_lock and ->debugfs_mutex has been cut by
> commit b769a2f409e7 ("blktrace: move copy_[to|from]_user() out of ->debugfs_lock"),
> so I'd suggest to double check this one.
Thanks. I missed the fix. That said, I still see the lockdep WARN "possible
circular locking dependency detected" with the v6.14-rc1 kernel. So I guess
there are two problems and I conflated them: one was fixed by commit
b769a2f409e7, and the other I still observe.
Please take a look at the kernel message below, which I captured from a
block/002 failure I have just recreated on my test node. The splat shows a
dependency chain different from the one observed with the v6.13-rc2 kernel.
[ 165.526908] [ T1103] run blktests block/002 at 2025-02-07 21:02:22
[ 165.814157] [ T1134] sd 9:0:0:0: [sdd] Synchronizing SCSI cache
[ 166.031013] [ T1135] scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1)
[ 166.031986] [ T1135] scsi host9: scsi_debug: version 0191 [20210520]
dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0
[ 166.035727] [ T1135] scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7
[ 166.038449] [ C1] scsi 9:0:0:0: Power-on or device reset occurred
[ 166.045105] [ T1135] sd 9:0:0:0: Attached scsi generic sg3 type 0
[ 166.046426] [ T94] sd 9:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB)
[ 166.048275] [ T94] sd 9:0:0:0: [sdd] Write Protect is off
[ 166.048854] [ T94] sd 9:0:0:0: [sdd] Mode Sense: 73 00 10 08
[ 166.051019] [ T94] sd 9:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 166.059601] [ T94] sd 9:0:0:0: [sdd] permanent stream count = 5
[ 166.063623] [ T94] sd 9:0:0:0: [sdd] Preferred minimum I/O size 512 bytes
[ 166.064329] [ T94] sd 9:0:0:0: [sdd] Optimal transfer size 524288 bytes
[ 166.094781] [ T94] sd 9:0:0:0: [sdd] Attached SCSI disk
[ 166.855819] [ T1161] ======================================================
[ 166.856339] [ T1161] WARNING: possible circular locking dependency detected
[ 166.856945] [ T1161] 6.14.0-rc1 #252 Not tainted
[ 166.857292] [ T1161] ------------------------------------------------------
[ 166.857874] [ T1161] blktrace/1161 is trying to acquire lock:
[ 166.858310] [ T1161] ffff88811dbfe5e0 (&mm->mmap_lock){++++}-{4:4}, at: __might_fault+0x99/0x120
[ 166.859053] [ T1161]
but task is already holding lock:
[ 166.859593] [ T1161] ffff8881082a1078 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: relay_file_read+0xa3/0x8a0
[ 166.860410] [ T1161]
which lock already depends on the new lock.
[ 166.861269] [ T1161]
the existing dependency chain (in reverse order) is:
[ 166.863693] [ T1161]
-> #5 (&sb->s_type->i_mutex_key#3){++++}-{4:4}:
[ 166.866064] [ T1161] down_write+0x8d/0x200
[ 166.867266] [ T1161] start_creating.part.0+0x82/0x230
[ 166.868544] [ T1161] debugfs_create_dir+0x3a/0x4c0
[ 166.869797] [ T1161] blk_register_queue+0x12d/0x430
[ 166.870986] [ T1161] add_disk_fwnode+0x6b1/0x1010
[ 166.872144] [ T1161] sd_probe+0x94e/0xf30
[ 166.873262] [ T1161] really_probe+0x1e3/0x8a0
[ 166.874372] [ T1161] __driver_probe_device+0x18c/0x370
[ 166.875544] [ T1161] driver_probe_device+0x4a/0x120
[ 166.876715] [ T1161] __device_attach_driver+0x15e/0x270
[ 166.877890] [ T1161] bus_for_each_drv+0x114/0x1a0
[ 166.878999] [ T1161] __device_attach_async_helper+0x19c/0x240
[ 166.880180] [ T1161] async_run_entry_fn+0x96/0x4f0
[ 166.881312] [ T1161] process_one_work+0x85a/0x1460
[ 166.882411] [ T1161] worker_thread+0x5e2/0xfc0
[ 166.883483] [ T1161] kthread+0x39d/0x750
[ 166.884548] [ T1161] ret_from_fork+0x30/0x70
[ 166.885629] [ T1161] ret_from_fork_asm+0x1a/0x30
[ 166.886728] [ T1161]
-> #4 (&q->debugfs_mutex){+.+.}-{4:4}:
[ 166.888799] [ T1161] __mutex_lock+0x1aa/0x1360
[ 166.889863] [ T1161] blk_mq_init_sched+0x3b5/0x5e0
[ 166.890907] [ T1161] elevator_switch+0x149/0x4b0
[ 166.891928] [ T1161] elv_iosched_store+0x29f/0x380
[ 166.892966] [ T1161] queue_attr_store+0x313/0x480
[ 166.893976] [ T1161] kernfs_fop_write_iter+0x39e/0x5a0
[ 166.895012] [ T1161] vfs_write+0x5f9/0xe90
[ 166.895970] [ T1161] ksys_write+0xf6/0x1c0
[ 166.896931] [ T1161] do_syscall_64+0x93/0x180
[ 166.897886] [ T1161] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 166.898945] [ T1161]
-> #3 (&q->q_usage_counter(io)#2){++++}-{0:0}:
[ 166.900757] [ T1161] blk_mq_submit_bio+0x19b8/0x2070
[ 166.901784] [ T1161] __submit_bio+0x36b/0x6d0
[ 166.902748] [ T1161] submit_bio_noacct_nocheck+0x542/0xca0
[ 166.903796] [ T1161] iomap_readahead+0x4c4/0x7e0
[ 166.904778] [ T1161] read_pages+0x198/0xaf0
[ 166.905718] [ T1161] page_cache_ra_order+0x4d3/0xb50
[ 166.906709] [ T1161] filemap_get_pages+0x2c7/0x1850
[ 166.907684] [ T1161] filemap_read+0x31d/0xc30
[ 166.908617] [ T1161] xfs_file_buffered_read+0x1e9/0x2a0 [xfs]
[ 166.909910] [ T1161] xfs_file_read_iter+0x25f/0x4a0 [xfs]
[ 166.911131] [ T1161] vfs_read+0x6bc/0xa20
[ 166.911994] [ T1161] ksys_read+0xf6/0x1c0
[ 166.912862] [ T1161] do_syscall_64+0x93/0x180
[ 166.913736] [ T1161] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 166.914751] [ T1161]
-> #2 (mapping.invalidate_lock#2){++++}-{4:4}:
[ 166.916401] [ T1161] down_read+0x9b/0x470
[ 166.917285] [ T1161] filemap_fault+0x231/0x2ac0
[ 166.918179] [ T1161] __do_fault+0xf4/0x5d0
[ 166.919039] [ T1161] do_fault+0x965/0x11d0
[ 166.919895] [ T1161] __handle_mm_fault+0x1060/0x1f40
[ 166.920829] [ T1161] handle_mm_fault+0x21a/0x6b0
[ 166.921722] [ T1161] do_user_addr_fault+0x322/0xaa0
[ 166.922648] [ T1161] exc_page_fault+0x7a/0x110
[ 166.923540] [ T1161] asm_exc_page_fault+0x22/0x30
[ 166.924466] [ T1161]
-> #1 (&vma->vm_lock->lock){++++}-{4:4}:
[ 166.926085] [ T1161] down_write+0x8d/0x200
[ 166.926943] [ T1161] vma_link+0x1ff/0x590
[ 166.927794] [ T1161] insert_vm_struct+0x15a/0x340
[ 166.928713] [ T1161] alloc_bprm+0x626/0xbf0
[ 166.929578] [ T1161] kernel_execve+0x85/0x2f0
[ 166.930476] [ T1161] call_usermodehelper_exec_async+0x21b/0x430
[ 166.931481] [ T1161] ret_from_fork+0x30/0x70
[ 166.932371] [ T1161] ret_from_fork_asm+0x1a/0x30
[ 166.933291] [ T1161]
-> #0 (&mm->mmap_lock){++++}-{4:4}:
[ 166.934868] [ T1161] __lock_acquire+0x2f52/0x5ea0
[ 166.935768] [ T1161] lock_acquire+0x1b1/0x540
[ 166.936678] [ T1161] __might_fault+0xb9/0x120
[ 166.937563] [ T1161] _copy_to_user+0x1e/0x80
[ 166.938438] [ T1161] relay_file_read+0x149/0x8a0
[ 166.939343] [ T1161] full_proxy_read+0x110/0x1d0
[ 166.940224] [ T1161] vfs_read+0x1bb/0xa20
[ 166.941082] [ T1161] ksys_read+0xf6/0x1c0
[ 166.941920] [ T1161] do_syscall_64+0x93/0x180
[ 166.942780] [ T1161] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 166.943755] [ T1161]
other info that might help us debug this:
[ 166.945985] [ T1161] Chain exists of:
&mm->mmap_lock --> &q->debugfs_mutex --> &sb->s_type->i_mutex_key#3
[ 166.948504] [ T1161] Possible unsafe locking scenario:
[ 166.950078] [ T1161] CPU0 CPU1
[ 166.950965] [ T1161] ---- ----
[ 166.951849] [ T1161] lock(&sb->s_type->i_mutex_key#3);
[ 166.952751] [ T1161] lock(&q->debugfs_mutex);
[ 166.953789] [ T1161] lock(&sb->s_type->i_mutex_key#3);
[ 166.954860] [ T1161] rlock(&mm->mmap_lock);
[ 166.955689] [ T1161]
*** DEADLOCK ***
[ 166.957776] [ T1161] 2 locks held by blktrace/1161:
[ 166.958623] [ T1161] #0: ffff88813c9ebcf8 (&f->f_pos_lock){+.+.}-{4:4}, at: fdget_pos+0x233/0x2f0
[ 166.959808] [ T1161] #1: ffff8881082a1078 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: relay_file_read+0xa3/0x8a0
[ 166.961118] [ T1161]
stack backtrace:
[ 166.962578] [ T1161] CPU: 2 UID: 0 PID: 1161 Comm: blktrace Not tainted 6.14.0-rc1 #252
[ 166.962582] [ T1161] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014
[ 166.962585] [ T1161] Call Trace:
[ 166.962586] [ T1161] <TASK>
[  166.962588] [ T1161]  dump_stack_lvl+0x6a/0x90
[ 166.962593] [ T1161] print_circular_bug.cold+0x1e0/0x274
[ 166.962597] [ T1161] check_noncircular+0x306/0x3f0
[ 166.962600] [ T1161] ? __pfx_check_noncircular+0x10/0x10
[ 166.962605] [ T1161] ? lockdep_lock+0xca/0x1c0
[ 166.962607] [ T1161] ? __pfx_lockdep_lock+0x10/0x10
[ 166.962611] [ T1161] __lock_acquire+0x2f52/0x5ea0
[ 166.962616] [ T1161] ? __pfx___lock_acquire+0x10/0x10
[ 166.962619] [ T1161] ? lock_acquire+0x1b1/0x540
[ 166.962623] [ T1161] lock_acquire+0x1b1/0x540
[ 166.962625] [ T1161] ? __might_fault+0x99/0x120
[ 166.962628] [ T1161] ? __pfx_lock_acquire+0x10/0x10
[ 166.962631] [ T1161] ? lock_is_held_type+0xd5/0x130
[ 166.962635] [ T1161] ? __pfx___might_resched+0x10/0x10
[ 166.962637] [ T1161] ? _raw_spin_unlock+0x29/0x50
[ 166.962640] [ T1161] ? __might_fault+0x99/0x120
[ 166.962642] [ T1161] __might_fault+0xb9/0x120
[ 166.962649] [ T1161] ? __might_fault+0x99/0x120
[ 166.962651] [ T1161] ? __check_object_size+0x3f3/0x540
[ 166.962654] [ T1161] _copy_to_user+0x1e/0x80
[ 166.962656] [ T1161] relay_file_read+0x149/0x8a0
[ 166.962661] [ T1161] ? selinux_file_permission+0x36d/0x420
[ 166.962665] [ T1161] full_proxy_read+0x110/0x1d0
[ 166.962667] [ T1161] ? rw_verify_area+0x2f7/0x520
[ 166.962670] [ T1161] vfs_read+0x1bb/0xa20
[ 166.962672] [ T1161] ? __pfx___mutex_lock+0x10/0x10
[ 166.962674] [ T1161] ? __pfx_lock_release+0x10/0x10
[ 166.962677] [ T1161] ? __pfx_vfs_read+0x10/0x10
[ 166.962683] [ T1161] ? __fget_files+0x1ae/0x2e0
[ 166.962687] [ T1161] ksys_read+0xf6/0x1c0
[ 166.962689] [ T1161] ? __pfx_ksys_read+0x10/0x10
[ 166.962693] [ T1161] do_syscall_64+0x93/0x180
[ 166.962697] [ T1161] ? lockdep_hardirqs_on_prepare+0x16d/0x400
[ 166.962700] [ T1161] ? do_syscall_64+0x9f/0x180
[ 166.962702] [ T1161] ? lockdep_hardirqs_on+0x78/0x100
[ 166.962704] [ T1161] ? do_syscall_64+0x9f/0x180
[ 166.962708] [ T1161] ? lockdep_hardirqs_on_prepare+0x16d/0x400
[ 166.962711] [ T1161] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 166.962714] [ T1161] RIP: 0033:0x7fdddfde0e4a
[ 166.962719] [ T1161] Code: 55 48 89 e5 48 83 ec 20 48 89 55 e8 48 89 75 f0 89 7d f8 e8 28 58 f8 ff 48 8b 55 e8 48 8b 75 f0 41 89 c0 8b 7d f8 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 2e 44 89 c7 48 89 45 f8 e8 82 58 f8 ff 48 8b
[ 166.962721] [ T1161] RSP: 002b:00007fdddecccd80 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[ 166.962726] [ T1161] RAX: ffffffffffffffda RBX: 00007fddd4000bb0 RCX: 00007fdddfde0e4a
[ 166.962727] [ T1161] RDX: 0000000000080000 RSI: 00007fdddc400000 RDI: 0000000000000006
[ 166.962729] [ T1161] RBP: 00007fdddecccda0 R08: 0000000000000000 R09: 0000000000000000
[ 166.962730] [ T1161] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000000
[ 166.962731] [ T1161] R13: 000056530426caa0 R14: 0000000000000000 R15: 00007fddd4002c90
[ 166.962735] [ T1161] </TASK>
[ 167.066759] [ T1103] sd 9:0:0:0: [sdd] Synchronizing SCSI cache