[PATCH v9 06/11] io_uring: introduce attributes for read/write and PI support
Darrick J. Wong
djwong at kernel.org
Wed Nov 20 09:35:17 PST 2024
On Fri, Nov 15, 2024 at 06:04:01PM +0000, Matthew Wilcox wrote:
> On Thu, Nov 14, 2024 at 01:09:44PM +0000, Pavel Begunkov wrote:
> > With SQE128 it's also a problem that now all SQEs are 128 bytes regardless
> > of whether a particular request needs it or not, and the user will need
> > to zero them for each request.
>
> The way we handled this in NVMe was to use a bit in the command that
> was called (iirc) FUSED, which let you use two consecutive entries for
> a single command.
>
> Some variant on that could surely be used for io_uring. Perhaps a
> special opcode that says "the real opcode is here, and this is a two-slot
> command". Processing gets a little spicy when one slot is the last in
the buffer and the next is the first in the buffer, but that's a SMOP.
I like willy's suggestion -- what's the difficulty in having an SQE flag
that says "...and keep going into the next SQE"?  I guess that
introduces the problem that you can no longer react to the observation
of 4 new SQEs by creating 4 new contexts to process those SQEs and
throwing all 4 of them at background threads, since you don't know how
many IOs are actually there.
That said, depending on the size of the PI metadata, it might be more
convenient for the app programmer to supply one pointer to a single
array of PI information for the entire IO request, packed in whatever
format the underlying device wants.
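For instance, with 512+8 or 4096+8 formats that's one 8-byte tuple per
logical block.  A hedged sketch of "one pointer to a packed PI array"
(these structs are illustrative, not necessarily the layout in this
series):

/*
 * Illustrative only.  One 8-byte T10 PI tuple per logical block,
 * packed in device order, and a single attribute pointing at the
 * whole array.
 */
#include <stdint.h>

struct pi_tuple_example {
	uint16_t guard_tag;	/* CRC of the data block */
	uint16_t app_tag;
	uint32_t ref_tag;	/* for Type 1, low 32 bits of the LBA */
};

struct pi_attr_example {
	uint64_t addr;		/* user pointer to the packed tuple array */
	uint32_t len;		/* nr_blocks * sizeof(struct pi_tuple_example) */
	uint16_t flags;		/* e.g. guard / ref tag check bits */
	uint16_t app_tag;	/* expected application tag, if checked */
};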
Thinking with my xfs(progs) hat on, if we ever wanted to run xfs_buf(fer
cache) IOs through io_uring with PI metadata, we'd probably want a
vectored io submission interface (xfs_buffers can map to discontiguous
LBA ranges on disk), but we'd probably have a single memory object to
hold all the PI information.
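Purely as a sketch of what I mean (hypothetical names throughout):
each extent maps a discontiguous LBA range, but one memory object
carries the PI tuples for all of them, concatenated in submission
order.

#include <stdint.h>

struct buf_extent_example {
	uint64_t lba;		/* starting disk block of this extent */
	uint32_t nr_blocks;	/* contiguous blocks in the extent */
	uint32_t rsvd;
};

struct buf_pi_io_example {
	const struct buf_extent_example *extents;
	uint32_t nr_extents;
	uint32_t pi_len;	/* total bytes at pi_buf */
	const void *pi_buf;	/* PI for every extent, back to back */
};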
But really, AFAICT it's 6 of one or half a dozen of the other, so I
don't care all that much so long as you all pick something and merge it.
:)
--D