Recursive locking complaint with nvme-5.13 branch
Christoph Hellwig
hch at infradead.org
Thu Apr 1 16:37:22 BST 2021
On Wed, Mar 31, 2021 at 09:03:36PM -0700, Bart Van Assche wrote:
> Hi,
>
> If I boot a VM with the nvme-5.13 branch (commit 24e238c92186
> ("nvme: warn of unhandled effects only once")) then the complaint
> shown below is reported. Is this a known issue?
This looks like someone is trying to open an nvme device as the backing
device for pktcdvd?  In that case this is a different bd_mutex instance
that merely shares a lockdep class with the one already held.  But I'm
really curious why systemd would do that.
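For illustration, here is a minimal userspace sketch of the pattern the
splat complains about.  struct bdev, open_bdev() and the pthread locking
are hypothetical stand-ins, not the real block layer code; the point is
only that the pktcdvd node and its backing device each have their own
bd_mutex, and the open path takes both, nested:

#include <pthread.h>

/*
 * Hypothetical analogue of the report: two distinct mutex instances
 * that a class-based checker like lockdep cannot tell apart without a
 * nesting annotation.
 */
struct bdev {
	pthread_mutex_t bd_mutex;
	struct bdev *backing;	/* pktcdvd's backing device, or NULL */
};

static void open_bdev(struct bdev *b)
{
	pthread_mutex_lock(&b->bd_mutex);
	/*
	 * The driver's ->open() runs under bd_mutex; for pktcdvd it
	 * opens the backing device, nesting a second bd_mutex of the
	 * same class:
	 */
	if (b->backing)
		open_bdev(b->backing);
	pthread_mutex_unlock(&b->bd_mutex);
}

int main(void)
{
	struct bdev nvme = { PTHREAD_MUTEX_INITIALIZER, NULL };
	struct bdev pkt  = { PTHREAD_MUTEX_INITIALIZER, &nvme };

	open_bdev(&pkt);	/* like open("/dev/pktcdvd0") */
	return 0;
}

Lockdep tracks locks by class rather than by instance, so the two
bd_mutex acquisitions look recursive to it.  The usual way to declare
such same-class nesting intentional is mutex_lock_nested() with a
subclass such as SINGLE_DEPTH_NESTING, which is what the "missing lock
nesting notation" hint in the report refers to.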
>
> Thanks,
>
> Bart.
>
>
> ============================================
> WARNING: possible recursive locking detected
> 5.12.0-rc3-dbg+ #6 Not tainted
> --------------------------------------------
> systemd-udevd/299 is trying to acquire lock:
> ffff88811b1e80a0 (&bdev->bd_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x85/0x350
>
> but task is already holding lock:
> ffff8881134100a0 (&bdev->bd_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x1a9/0x350
>
> other info that might help us debug this:
> Possible unsafe locking scenario:
>
> CPU0
> ----
> lock(&bdev->bd_mutex);
> lock(&bdev->bd_mutex);
>
> *** DEADLOCK ***
>
> May be due to missing lock nesting notation
>
> 3 locks held by systemd-udevd/299:
> #0: ffff8881134100a0 (&bdev->bd_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x1a9/0x350
> #1: ffffffffa10269c8 (pktcdvd_mutex){+.+.}-{3:3}, at: pkt_open+0x22/0x15a [pktcdvd]
> #2: ffffffffa1025788 (&ctl_mutex#2){+.+.}-{3:3}, at: pkt_open+0x30/0x15a [pktcdvd]
>
> stack backtrace:
> CPU: 6 PID: 299 Comm: systemd-udevd Not tainted 5.12.0-rc3-dbg+ #6
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a-rebuilt.opensuse.org 04/01/2014
> Call Trace:
> show_stack+0x52/0x58
> dump_stack+0x9d/0xcf
> print_deadlock_bug.cold+0x131/0x136
> validate_chain+0x6d3/0xc70
> ? check_prev_add+0x11d0/0x11d0
> __lock_acquire+0x500/0x920
> ? start_flush_work+0x375/0x510
> ? __this_cpu_preempt_check+0x13/0x20
> lock_acquire.part.0+0x117/0x210
> ? blkdev_get_by_dev+0x85/0x350
> ? rcu_read_unlock+0x50/0x50
> ? __this_cpu_preempt_check+0x13/0x20
> ? lock_is_held_type+0xdb/0x130
> lock_acquire+0x9b/0x1a0
> ? blkdev_get_by_dev+0x85/0x350
> __mutex_lock+0x117/0xb60
> ? blkdev_get_by_dev+0x85/0x350
> ? blkdev_get_by_dev+0x85/0x350
> ? mutex_lock_io_nested+0xa70/0xa70
> ? __kasan_check_write+0x14/0x20
> ? __mutex_unlock_slowpath+0xa7/0x290
> ? __ww_mutex_check_kill+0x160/0x160
> ? trace_hardirqs_on+0x2b/0x130
> ? mutex_unlock+0x12/0x20
> ? disk_block_events+0x92/0xc0
> mutex_lock_nested+0x1b/0x20
> blkdev_get_by_dev+0x85/0x350
> ? __mutex_lock+0x49c/0xb60
> pkt_open_dev+0x7f/0x370 [pktcdvd]
> ? pkt_open_write+0x120/0x120 [pktcdvd]
> ? __ww_mutex_check_kill+0x160/0x160
> pkt_open+0xfd/0x15a [pktcdvd]
> __blkdev_get+0xa3/0x450
> blkdev_get_by_dev+0x1b4/0x350
> ? __kasan_check_read+0x11/0x20
> blkdev_open+0xa4/0x120
> do_dentry_open+0x27d/0x690
> ? blkdev_get_by_dev+0x350/0x350
> vfs_open+0x58/0x60
> do_open+0x316/0x4a0
> path_openat+0x1b8/0x260
> ? do_tmpfile+0x160/0x160
> ? __this_cpu_preempt_check+0x13/0x20
> do_filp_open+0x12d/0x240
> ? may_open_dev+0x60/0x60
> ? __kasan_check_read+0x11/0x20
> ? do_raw_spin_unlock+0x98/0xf0
> ? preempt_count_sub+0x18/0xc0
> ? _raw_spin_unlock+0x2d/0x50
> do_sys_openat2+0xe9/0x260
> ? build_open_flags+0x2a0/0x2a0
> __x64_sys_openat+0xd3/0x130
> ? __ia32_sys_open+0x110/0x110
> ? __secure_computing+0x74/0x140
> ? syscall_trace_enter.constprop.0+0x71/0x230
> do_syscall_64+0x32/0x80
> entry_SYSCALL_64_after_hwframe+0x44/0xae
>
---end quoted text---