[PATCH 2/2] nvme: use blk-mq polling for uring commands

Kanchan Joshi joshi.k at samsung.com
Wed Mar 29 01:46:18 PDT 2023


On Tue, Mar 28, 2023 at 08:52:53AM -0600, Keith Busch wrote:
>On Tue, Mar 28, 2023 at 01:19:39PM +0530, Kanchan Joshi wrote:
>> On Mon, Mar 27, 2023 at 06:48:30PM -0600, Keith Busch wrote:
>> > On Mon, Mar 27, 2023 at 10:50:47PM +0530, Kanchan Joshi wrote:
>> > > On Mon, Mar 27, 2023 at 8:59 PM Keith Busch <kbusch at kernel.org> wrote:
>> > > > > >     rcu_read_lock();
>> > > > > > -   bio = READ_ONCE(ioucmd->cookie);
>> > > > > > -   ns = container_of(file_inode(ioucmd->file)->i_cdev,
>> > > > > > -                   struct nvme_ns, cdev);
>> > > > > > -   q = ns->queue;
>> > > > > > -   if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags) && bio && bio->bi_bdev)
>> > > > > > -           ret = bio_poll(bio, iob, poll_flags);
>> > > > > > +   req = READ_ONCE(ioucmd->cookie);
>> > > > > > +   if (req) {
>> > > > >
>> > > > > This is risky. We are not sure if the cookie is actually "req" at this
>> > > > > moment.
>> > > >
>> > > > What else could it be? It's either a real request from a polled hctx tag, or
>> > > > NULL at this point.
>> > >
>> > > It can also be a function pointer that gets assigned on irq-driven
>> > > completion. See struct io_uring_cmd - we are tight on cacheline space,
>> > > so cookie and task_work_cb share the same storage.
>> > >
>> > > > It's safe to check the cookie like this and rely on its contents.
>> > > Hence it is not safe. Please try running this without poll queues (at
>> > > the nvme level), and you'll see failures.
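
(For reference, the shared storage in question looks roughly like this in
struct io_uring_cmd; treat the exact layout as approximate for the tree this
series is based on:)

	struct io_uring_cmd {
		struct file	*file;
		const void	*cmd;
		union {
			/* callback to defer completions to task context */
			void (*task_work_cb)(struct io_uring_cmd *cmd);
			/* used for polled completion, i.e. the request */
			void	*cookie;
		};
		u32	cmd_op;
		u32	flags;
		u8	pdu[32];	/* available inline for free use */
	};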
>> >
>> > Okay, so you have an io_uring polling instance used with a file that has
>> > poll capabilities but doesn't have any polling hctxs. It would be nice to
>> > exclude these from io_uring's polling since they're wasting CPU time, but
>> > that doesn't look easy to do.
>>
>> Do you mean having the ring with IOPOLL set, and yet skipping the attempt
>> to actively reap completions for certain IOs?
>
>Yes, exactly. It'd be great if non-polled requests didn't get added to the
>ctx->iopoll_list in the first place.
>
>> > This simple patch atop should work though.
>> >
>> > ---
>> > diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
>> > index 369e8519b87a2..e3ff019404816 100644
>> > --- a/drivers/nvme/host/ioctl.c
>> > +++ b/drivers/nvme/host/ioctl.c
>> > @@ -612,6 +612,8 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
>> >
>> > 	if (blk_rq_is_poll(req))
>> > 		WRITE_ONCE(ioucmd->cookie, req);
>> > +	else if (issue_flags & IO_URING_F_IOPOLL)
>> > +		ioucmd->flags |= IORING_URING_CMD_NOPOLL;
>>
>> If IO_URING_F_IOPOLL had come here as part of "ioucmd->flags", we could
>> have just cleared it here. That would avoid the need for a NOPOLL flag.
>> That said, I don't feel strongly about the new flag either. You decide.
>
>IO_URING_F_IOPOLL, while named in an enum that sounds suspiciously like it
>is part of ioucmd->flags, is actually a ctx flag, which is a little
>confusing. And we need to be a little careful here: the existing
>ioucmd->flags is used for uapi flags.
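
(To make the two flag spaces concrete, a sketch; the names are from mainline
but treat the exact values as illustrative:)

	/*
	 * 1) Kernel-internal issue flags, passed to ->uring_cmd(..., issue_flags).
	 *    IO_URING_F_IOPOLL reflects ring (ctx) state, i.e. the ring was set
	 *    up with IORING_SETUP_IOPOLL; it never lives in ioucmd->flags.
	 */
	enum io_uring_cmd_flags {
		IO_URING_F_IOPOLL	= (1 << 10),	/* value illustrative */
	};

	/*
	 * 2) uapi flags the submitter puts in sqe->uring_cmd_flags, which get
	 *    copied into ioucmd->flags; a kernel-private bit like the proposed
	 *    IORING_URING_CMD_NOPOLL would have to stay clear of these.
	 */
	#define IORING_URING_CMD_FIXED	(1U << 0)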

Indeed. If this is getting crufty, the series can just enable polling on
no-payload requests. Reducing the nvme handlers is a task for another day.
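Something along these lines, roughly (a sketch against nvme_uring_cmd_io();
gating on blk_rq_bytes() is my assumption of what "no-payload" would look
like, not what the series does today):

	/*
	 * Only mark the command pollable when the request carries no payload;
	 * everything else completes via irq and keeps using task_work_cb, so
	 * the cookie/task_work_cb aliasing never becomes ambiguous.
	 */
	if (blk_rq_is_poll(req) && !blk_rq_bytes(req))
		WRITE_ONCE(ioucmd->cookie, req);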

