[PATCH 05/17] nvme: wire-up support for async-passthru on char-device.
Kanchan Joshi
joshiiitr at gmail.com
Thu Mar 24 10:45:12 PDT 2022
On Thu, Mar 24, 2022 at 11:52 AM Christoph Hellwig <hch at lst.de> wrote:
>
> On Wed, Mar 16, 2022 at 12:57:27PM +0530, Kanchan Joshi wrote:
> > So what is the picture that you have in mind for struct io_uring_cmd?
> > Moving meta fields out makes it look like this -
>
>
> > @@ -28,7 +28,10 @@ struct io_uring_cmd {
> >  	u32	cmd_op;
> >  	u16	cmd_len;
> >  	u16	unused;
> > -	u8	pdu[28]; /* available inline for free use */
> > +	void __user *meta_buffer; /* nvme pt specific */
> > +	u32	meta_len; /* nvme pt specific */
> > +	u8	pdu[16]; /* available inline for free use */
> > +
> >  };
> > And corresponding nvme 16-byte pdu -
> >
> > struct nvme_uring_cmd_pdu {
> > -	u32 meta_len;
> > 	union {
> > 		struct bio *bio;
> > 		struct request *req;
> > 	};
> > 	void *meta; /* kernel-resident buffer */
> > -	void __user *meta_buffer;
> > } __packed;
>
> No, I'd also move the meta field (and call it meta_buffer) to
> struct io_uring_cmd, and replace the pdu array with a simple
>
> void *private;
That clears it up. We can go that route, but the tradeoff is that
while we clean up one cast in nvme, we make async-cmd far too
nvme-passthrough specific, as the sketch below shows.
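To spell out the concern, the generic struct would end up carrying
several nvme-only fields. A rough sketch of my reading of the
suggestion (exact field naming is still being discussed, and
unrelated fields are elided):

struct io_uring_cmd {
	/* ... */
	u32		cmd_op;
	u16		cmd_len;
	u16		unused;
	void __user	*meta_buffer;	/* nvme passthrough only */
	u32		meta_len;	/* nvme passthrough only */
	void		*meta;		/* nvme passthrough only */
	void		*private;	/* per-driver pointer, replaces pdu[] */
};

Any other driver wiring up async-cmd would see fields it can never use.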
People have talked about using async-cmd for other use cases; Darrick
mentioned using it for xfs-scrub, and Luis had some ideas (other than
nvme) too.
The 28-byte pdu array is there to avoid fast-path allocations. With
this change it shrinks to 8 bytes, which is fine for the nvme
passthrough ioctl now that the other fields have moved out. But for
other use cases, 8 bytes of generic space may not be enough to avoid
fast-path allocations.
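For reference, the pattern being defended here is that a driver
overlays its per-command state on the inline pdu bytes instead of
allocating. A minimal sketch (foo_cmd_pdu is a made-up name; the size
check mirrors what the nvme side does):

/* per-command driver state we want to stash without an allocation */
struct foo_cmd_pdu {
	struct bio	*bio;
	void		*meta;
};

static inline struct foo_cmd_pdu *foo_cmd_pdu(struct io_uring_cmd *ioucmd)
{
	/* only legal while the state fits in the inline array */
	BUILD_BUG_ON(sizeof(struct foo_cmd_pdu) > sizeof(ioucmd->pdu));
	return (struct foo_cmd_pdu *)&ioucmd->pdu;
}

With a bare void *private, any driver whose state does not fit in a
single pointer is back to allocating in the fast path.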