[PATCH v4 1/5] fs,io_uring: add infrastructure for uring-cmd

Kanchan Joshi joshi.k at samsung.com
Fri May 6 00:12:16 PDT 2022


On Thu, May 05, 2022 at 10:17:39AM -0600, Jens Axboe wrote:
>On 5/5/22 12:06 AM, Kanchan Joshi wrote:
>> +static int io_uring_cmd_prep(struct io_kiocb *req,
>> +			     const struct io_uring_sqe *sqe)
>> +{
>> +	struct io_uring_cmd *ioucmd = &req->uring_cmd;
>> +	struct io_ring_ctx *ctx = req->ctx;
>> +
>> +	if (ctx->flags & IORING_SETUP_IOPOLL)
>> +		return -EOPNOTSUPP;
>> +	/* do not support uring-cmd without big SQE/CQE */
>> +	if (!(ctx->flags & IORING_SETUP_SQE128))
>> +		return -EOPNOTSUPP;
>> +	if (!(ctx->flags & IORING_SETUP_CQE32))
>> +		return -EOPNOTSUPP;
>> +	if (sqe->ioprio || sqe->rw_flags)
>> +		return -EINVAL;
>> +	ioucmd->cmd = sqe->cmd;
>> +	ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
>> +	return 0;
>> +}
>
>While looking at the other suggested changes, I noticed a more
>fundamental issue with the passthrough support. For any other command,
>SQE contents are stable once prep has been done. The above does do that
>for the basic items, but this case is special as the lower level command
>itself resides in the SQE.
>
>For cases where the command needs deferral, it's problematic. There are
>two main cases where this can happen:
>
>- The issue attempt yields -EAGAIN (we ran out of requests, etc.). If you
>  look at other commands, those with data that doesn't fit in the
>  io_kiocb itself need to allocate room for that data and keep it
>  persistent.
>
>- Deferral is specified by the application, using e.g. IOSQE_IO_LINK or
>  IOSQE_ASYNC.
>
>We're totally missing support for both of these cases. Consider the case
>where the ring is set up with an SQ size of 1. You prep a passthrough
>command (request A) and issue it with io_uring_submit(). Due to one of
>the two above mentioned conditions, the internal request is deferred.
>Either it was sent to ->uring_cmd() but we got -EAGAIN, or it was
>deferred even before that happened. The application doesn't know this
>happened, it gets another SQE to submit a new request (request B). Fills
>it in, calls io_uring_submit(). Since we only have one SQE available in
>that ring, when request A gets re-issued, it's now happily reading SQE
>contents from command B. Oops.
>
>This is why prep handlers are the only ones that get an sqe passed to
>them. They are supposed to ensure that we no longer read from the SQE
>past that point. Applications can always rely on the fact that once
>io_uring_submit() has been done, which consumes the SQE in the SQ ring,
>no further reads are done from that SQE.
>
Thanks for explaining; that gives a great deal of clarity.
Are there already some tests (liburing, fio, etc.) that you use to exercise
this part?
Different from what you mentioned, but I was forcing the failure scenario by
setting a low QD in nvme and pumping commands at a higher QD than that.
But that was only testing that we return the failure to userspace (since
deferral was not there).
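
If I am following correctly, a deferral-safe version of the prep above would
stash the inline command in per-request storage at prep time, along the lines
of the rough sketch below (untested; uring_cmd_pdu_size() and the use of
req->async_data here are placeholders of mine, not something this patch has):

/*
 * Rough sketch: copy the big-SQE command bytes at prep time so that a
 * deferred or -EAGAIN reissue never reads the (possibly reused) SQE again.
 */
static int io_uring_cmd_prep_copy(struct io_kiocb *req,
				  const struct io_uring_sqe *sqe)
{
	struct io_uring_cmd *ioucmd = &req->uring_cmd;
	size_t cmd_size = uring_cmd_pdu_size(req->ctx);	/* assumed helper */
	void *buf;

	/* same IOPOLL/SQE128/CQE32/ioprio checks as in the patch above */

	buf = kmalloc(cmd_size, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	memcpy(buf, sqe->cmd, cmd_size);
	req->async_data = buf;		/* freed by the usual cleanup path */
	ioucmd->cmd = buf;		/* point at our copy, not the SQ ring */
	ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
	return 0;
}

The copy could probably be limited to the deferral path, the way other opcodes
only move their SQE state into req->async_data when they actually get deferred,
but the point stands that nothing should dereference the SQ ring for this
request once prep has returned.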

