[LSF/MM/BPF ATTEND][LSF/MM/BPF Topic] Non-block IO
Kanchan Joshi
joshi.k at samsung.com
Wed Apr 12 06:26:15 PDT 2023
On Wed, Apr 12, 2023 at 10:33:40AM +0800, Ming Lei wrote:
>On Wed, Apr 12, 2023 at 04:18:16AM +0530, Kanchan Joshi wrote:
>> > > 4. Direct NVMe queues - will there be interest in having io_uring
>> > > managed NVMe queues? Sort of a new ring, for which I/O is destaged from
>> > > io_uring SQE to NVMe SQE without having to go through intermediate
>> > > constructs (i.e., bio/request). Hopefully, that can further amp up the
>> > > efficiency of IO.
>> >
>> > This is interesting, and I've pondered something like that before too. I
>> > think it's worth investigating and hacking up a prototype. I recently
>> > had one user of IOPOLL assume that setting up a ring with IOPOLL would
>> > automatically create a polled queue on the driver side and that is what
>> > would be used for IO. And while that's not how it currently works, it
>> > definitely does make sense and we could make some things faster like
>> > that. It would also potentially make it easier to enable the cancelation
>> > referenced in #1 above, if it's restricted to the queue(s) that the ring
>> > "owns".
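To make that user-side expectation concrete: the ring in question is just
one created with IORING_SETUP_IOPOLL. A minimal liburing sketch (the helper
name setup_polled_ring() is only for illustration):

	#include <liburing.h>

	/*
	 * Create a ring with IORING_SETUP_IOPOLL. This asks io_uring to poll
	 * for completions; as noted above, it does not by itself create or
	 * hand over a polled queue on the driver side.
	 */
	static int setup_polled_ring(struct io_uring *ring, unsigned entries)
	{
		return io_uring_queue_init(entries, ring, IORING_SETUP_IOPOLL);
	}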
>>
>> So I am looking at prototyping it, exclusively for the polled-io case.
>> And for that, is there already a way to ensure that there are no
>> concurrent submissions to this ring (set up with the IORING_SETUP_IOPOLL
>> flag)?
>> That will generally be the case (submissions happen under the uring_lock
>> mutex), but a submission may still get punted to io-wq worker(s), which
>> do not take that mutex.
>> So the original task and a worker may end up doing concurrent submissions.
>
>It seems like a defect in the uring command support, since io_ring_ctx and
>io_ring_submit_lock() can't be exported to drivers.
Sorry, I did not follow the defect part.
io-wq not acquiring uring_lock in the case of uring-cmd - is that a defect?
The same happens for direct block-io too.
Or do you mean something else here?
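As far as the locking goes, what a driver sees is the issue_flags passed to
its ->uring_cmd() handler; IO_URING_F_UNLOCKED is set when the handler runs
without uring_lock held (i.e. the io-wq case above). A rough sketch of one
way to keep submissions serialized - demo_uring_cmd() is hypothetical, not
existing code:

	#include <linux/errno.h>
	#include <linux/io_uring.h>

	static int demo_uring_cmd(struct io_uring_cmd *ioucmd,
				  unsigned int issue_flags)
	{
		/*
		 * Called from io-wq without the ring mutex held: refuse this
		 * path so that all submissions to the ring-owned queue stay
		 * serialized under uring_lock.
		 */
		if (issue_flags & IO_URING_F_UNLOCKED)
			return -EOPNOTSUPP;

		/* destage the io_uring SQE into a driver-owned NVMe SQE */
		return -EIOCBQUEUED;
	}

Whether refusing (or re-punting) the io-wq path like this is acceptable is
exactly the open question above.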