[LSF/MM/BPF ATTEND][LSF/MM/BPF Topic] Non-block IO

Ming Lei ming.lei at redhat.com
Tue Apr 11 19:33:40 PDT 2023


On Wed, Apr 12, 2023 at 04:18:16AM +0530, Kanchan Joshi wrote:
> > > 4. Direct NVMe queues - will there be interest in having io_uring
> > > managed NVMe queues?  Sort of a new ring, for which I/O is destaged from
> > > io_uring SQE to NVMe SQE without having to go through intermediate
> > > constructs (i.e., bio/request). Hopefully, that can further amp up the
> > > efficiency of IO.
> >
> > This is interesting, and I've pondered something like that before too. I
> > think it's worth investigating and hacking up a prototype. I recently
> > had one user of IOPOLL assume that setting up a ring with IOPOLL would
> > automatically create a polled queue on the driver side and that is what
> > would be used for IO. And while that's not how it currently works, it
> > definitely does make sense and we could make some things faster like
> > that. It would also potentially make it easier to enable the cancelation
> > referenced in #1 above, if it's restricted to the queue(s) that the ring "owns".
> 
> So I am looking at prototyping it, exclusively for the polled-io case.
> And for that, is there already a way to ensure that there are no
> concurrent submissions to this ring (set with IORING_SETUP_IOPOLL
> flag)?
> That will generally be the case (submissions happen under the
> uring_lock mutex), but a submission may still get punted to io-wq
> worker(s), which do not take that mutex.
> So the original task and a worker may end up submitting concurrently.

That looks like a defect in the uring_cmd support, since io_ring_ctx and
io_ring_submit_lock() can't be exported to drivers.

It can also be triggered if the request is part of a link chain.

The issue can probably be worked around with something like:

	if (issue_flags & IO_URING_F_UNLOCKED)
		io_uring_cmd_complete_in_task(ioucmd, task_work_cb);

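For example, a driver's ->uring_cmd() handler could defer the unlocked path
to task work roughly like below. This is just an untested sketch: the foo_*
helpers are made-up names, and the exact io_uring_cmd_complete_in_task()
callback signature depends on the kernel version.

	/* foo_* names are made up for illustration only */
	static void foo_cmd_task_cb(struct io_uring_cmd *ioucmd,
				    unsigned int issue_flags)
	{
		/*
		 * Runs via task_work in the submitting task's context,
		 * so it is serialized with the ring's uring_lock again.
		 */
		foo_submit_polled_cmd(ioucmd, issue_flags);
	}

	static int foo_uring_cmd(struct io_uring_cmd *ioucmd,
				 unsigned int issue_flags)
	{
		if (issue_flags & IO_URING_F_UNLOCKED) {
			/*
			 * Came from io-wq (or a link chain) without
			 * uring_lock held; punt to task work instead of
			 * submitting concurrently.
			 */
			io_uring_cmd_complete_in_task(ioucmd, foo_cmd_task_cb);
			return -EIOCBQUEUED;
		}

		return foo_submit_polled_cmd(ioucmd, issue_flags);
	}

That keeps all real submissions serialized on the original task, at the cost
of an extra task_work bounce for the io-wq/link cases.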

Thanks,
Ming



