[PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
Ming Lei
ming.lei at redhat.com
Wed Aug 9 00:53:35 PDT 2023
On Wed, Aug 09, 2023 at 12:29:20PM +0530, Kanchan Joshi wrote:
> On Wed, Aug 09, 2023 at 10:04:40AM +0800, Ming Lei wrote:
> > Now nvme_ns_chr_uring_cmd_iopoll() has switched to request-based io
> > polling, and the associated NS is guaranteed to be live while io
> > polling is in progress, so the request is guaranteed to be valid
> > because blk-mq uses a pre-allocated request pool.
> >
> > Remove the rcu read lock in nvme_ns_chr_uring_cmd_iopoll(), which
> > isn't needed any more after switching to request-based io polling.
>
> > Fix "BUG: sleeping function called from invalid context", because
> > set_page_dirty_lock(), called from blk_rq_unmap_user(), may sleep.
> >
> > Fixes: 585079b6e425 ("nvme: wire up async polling for io passthrough commands")
> > Reported-by: Guangwu Zhang <guazhang at redhat.com>
>
> Thanks Ming. Looks fine, but any link to this report?
> I don't see this breaking in my tests. So I wonder how to reproduce and
> improve the coverage.
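The splat needs CONFIG_DEBUG_ATOMIC_SLEEP, which is probably why your runs
don't show it; the trace below is from a RHEL debug kernel. A job along the
following lines should exercise the passthrough polling path (only a sketch:
the poll queue count and device name are examples, not the exact recipe from
the report):

    # example setup: --hipri needs nvme poll queues
    modprobe nvme poll_queues=2

    # randread so blk_rq_unmap_user() dirties the user pages on completion
    fio --name=uring-cmd-poll --ioengine=io_uring_cmd --cmd_type=nvme \
        --filename=/dev/ng0n1 --rw=randread --bs=4k --iodepth=32 \
        --hipri --time_based --runtime=30
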
It is reported in RH BZ2227639, and the stack trace follows:
[ 3286.960425] BUG: sleeping function called from invalid context at include/linux/pagemap.h:914
[ 3286.960434] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 530910, name: fio
[ 3286.960440] preempt_count: 1, expected: 0
[ 3286.960443] RCU nest depth: 1, expected: 0
[ 3286.960446] 3 locks held by fio/530910:
[ 3286.960450] #0: ffff8881108e40b0 (&ctx->uring_lock){+.+.}-{3:3}, at: __do_sys_io_uring_enter+0x535/0x980
[ 3286.960476] #1: ffffffff9b72a320 (rcu_read_lock){....}-{1:2}, at: nvme_ns_chr_uring_cmd_iopoll+0x5/0x270 [nvme_core]
[ 3286.960530] #2: ffff88837937b098 (&nvmeq->cq_poll_lock){+.+.}-{2:2}, at: nvme_poll+0x129/0x180 [nvme]
[ 3286.960553] Preemption disabled at:
[ 3286.960555] [<0000000000000000>] 0x0
[ 3286.960691] CPU: 1 PID: 530910 Comm: fio Kdump: loaded Tainted: G W L X ------- --- 5.14.0-345.el9.x86_64+debug #1
[ 3286.960700] Hardware name: Dell Inc. PowerEdge R640/06DKY5, BIOS 2.15.1 06/15/2022
[ 3286.960704] Call Trace:
[ 3286.960707] <TASK>
[ 3286.960720] dump_stack_lvl+0x57/0x81
[ 3286.960734] __might_resched.cold+0x222/0x26b
[ 3286.960756] set_page_dirty_lock+0x1d/0x130
[ 3286.960773] __bio_release_pages+0x266/0x470
[ 3286.960811] blk_rq_unmap_user+0x2a8/0x660
[ 3286.960824] ? lock_acquire+0x1d8/0x640
[ 3286.960839] ? sched_clock_cpu+0x15/0x1b0
[ 3286.960850] ? find_held_lock+0x33/0x120
[ 3286.960870] ? __pfx_blk_rq_unmap_user+0x10/0x10
[ 3286.960876] ? __lock_release+0x4c1/0xa00
[ 3286.960894] ? __pfx___lock_release+0x10/0x10
[ 3286.960908] ? mark_held_locks+0xa5/0xf0
[ 3286.960938] nvme_uring_cmd_end_io+0x204/0x300 [nvme_core]
[ 3286.960974] ? __pfx_nvme_uring_cmd_end_io+0x10/0x10 [nvme_core]
[ 3286.961020] __blk_mq_end_request+0xf6/0x4c0
[ 3286.961042] nvme_poll_cq+0x71e/0xe40 [nvme]
[ 3286.961102] nvme_poll+0x134/0x180 [nvme]
[ 3286.961121] blk_mq_poll_classic+0x179/0x420
[ 3286.961153] bio_poll+0x1f5/0x440
[ 3286.961182] nvme_ns_chr_uring_cmd_iopoll+0x16f/0x270 [nvme_core]
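
The rcu_read_lock frame above (lock #1, taken in nvme_ns_chr_uring_cmd_iopoll)
is what the patch drops. With request-based polling the poll side reduces to
roughly the sketch below; how the polled request is looked up from the
uring_cmd pdu is written from memory here, so treat the field access as an
assumption rather than the exact diff:

	/*
	 * Sketch only: pdu->req as the way to reach the polled request is
	 * an assumption. The point is that blk_rq_poll() runs without any
	 * rcu read section around it.
	 */
	int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
					 struct io_comp_batch *iob,
					 unsigned int poll_flags)
	{
		struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
		struct request *req = pdu->req;

		if (req && blk_rq_is_poll(req))
			return blk_rq_poll(req, iob, poll_flags);
		return 0;
	}
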
Thanks,
Ming