[PATCH 08/10] io_uring/rw: add support to send meta along with read/write
Kanchan Joshi
joshi.k at samsung.com
Mon Apr 29 13:11:32 PDT 2024
On 4/26/2024 7:55 PM, Jens Axboe wrote:
>> diff --git a/io_uring/rw.c b/io_uring/rw.c
>> index 3134a6ece1be..b2c9ac91d5e5 100644
>> --- a/io_uring/rw.c
>> +++ b/io_uring/rw.c
>> @@ -587,6 +623,8 @@ static int kiocb_done(struct io_kiocb *req, ssize_t ret,
>>
>> req->flags &= ~REQ_F_REISSUE;
>> iov_iter_restore(&io->iter, &io->iter_state);
>> + if (unlikely(rw->kiocb.ki_flags & IOCB_USE_META))
>> + iov_iter_restore(&io->meta.iter, &io->iter_meta_state);
>> return -EAGAIN;
>> }
>> return IOU_ISSUE_SKIP_COMPLETE;
> This puzzles me a bit, why is the restore now dependent on
> IOCB_USE_META?
Both the save and the restore of the meta state are under this condition (so it seemed natural).
Also, to avoid growing "struct io_async_rw" too much, this patch keeps
meta/iter_meta_state in the same memory as wpq. So doing the restore
unconditionally can corrupt wpq for buffered io.
>> @@ -768,7 +806,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
>> if (!(req->flags & REQ_F_FIXED_FILE))
>> req->flags |= io_file_get_flags(file);
>>
>> - kiocb->ki_flags = file->f_iocb_flags;
>> + kiocb->ki_flags |= file->f_iocb_flags;
>> ret = kiocb_set_rw_flags(kiocb, rw->flags);
>> if (unlikely(ret))
>> return ret;
>> @@ -787,7 +825,8 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
>> if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
>> return -EOPNOTSUPP;
>>
>> - kiocb->private = NULL;
>> + if (likely(!(kiocb->ki_flags & IOCB_USE_META)))
>> + kiocb->private = NULL;
>> kiocb->ki_flags |= IOCB_HIPRI;
>> kiocb->ki_complete = io_complete_rw_iopoll;
>> req->iopoll_completed = 0;
>
> Why don't we just set ->private generically earlier, eg like we do for
> the ki_flags, rather than have it be a branch in here?
Not sure if I am missing what you have in mind, but kiocb->private was
already set before we reached this point (in io_rw_meta), so we don't
overwrite it here.
>> @@ -853,7 +892,8 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
>> } else if (ret == -EIOCBQUEUED) {
>> return IOU_ISSUE_SKIP_COMPLETE;
>> } else if (ret == req->cqe.res || ret <= 0 || !force_nonblock ||
>> - (req->flags & REQ_F_NOWAIT) || !need_complete_io(req)) {
>> + (req->flags & REQ_F_NOWAIT) || !need_complete_io(req) ||
>> + (kiocb->ki_flags & IOCB_USE_META)) {
>> /* read all, failed, already did sync or don't want to retry */
>> goto done;
>> }
>
> Would it be cleaner to stuff that IOCB_USE_META check in
> need_complete_io(), as that would closer seem to describe why that check
> is there in the first place? With a comment.
Yes, will do.
>> @@ -864,6 +904,12 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
>> * manually if we need to.
>> */
>> iov_iter_restore(&io->iter, &io->iter_state);
>> + if (unlikely(kiocb->ki_flags & IOCB_USE_META)) {
>> + /* don't handle partial completion for read + meta */
>> + if (ret > 0)
>> + goto done;
>> + iov_iter_restore(&io->meta.iter, &io->iter_meta_state);
>> + }
>
> Also seems a bit odd why we need this check here, surely if this is
> needed other "don't do retry IOs" conditions would be the same?
Yes, will revisit.
>> @@ -1053,7 +1099,8 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
>> if (ret2 == -EAGAIN && (req->ctx->flags & IORING_SETUP_IOPOLL))
>> goto ret_eagain;
>>
>> - if (ret2 != req->cqe.res && ret2 >= 0 && need_complete_io(req)) {
>> + if (ret2 != req->cqe.res && ret2 >= 0 && need_complete_io(req)
>> + && !(kiocb->ki_flags & IOCB_USE_META)) {
>> trace_io_uring_short_write(req->ctx, kiocb->ki_pos - ret2,
>> req->cqe.res, ret2);
>
> Same here. Would be nice to integrate this a bit nicer rather than have
> a bunch of "oh we also need this extra check here" conditions.
Will look into this too.
>> @@ -1074,12 +1121,33 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
>> } else {
>> ret_eagain:
>> iov_iter_restore(&io->iter, &io->iter_state);
>> + if (unlikely(kiocb->ki_flags & IOCB_USE_META))
>> + iov_iter_restore(&io->meta.iter, &io->iter_meta_state);
>> if (kiocb->ki_flags & IOCB_WRITE)
>> io_req_end_write(req);
>> return -EAGAIN;
>> }
>> }
>
> Same question here on the (now) conditional restore.
I did not get the concern. Would you prefer it to be unconditional?
>> +int io_rw_meta(struct io_kiocb *req, unsigned int issue_flags)
>> +{
>> + struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
>> + struct io_async_rw *io = req->async_data;
>> + struct kiocb *kiocb = &rw->kiocb;
>> + int ret;
>> +
>> + if (!(req->file->f_flags & O_DIRECT))
>> + return -EOPNOTSUPP;
>
> Why isn't this just caught at init time when IOCB_DIRECT is checked?
io_rw_init_file() gets invoked after this point, and the IOCB_DIRECT
check there applies only in the IOPOLL case. We want to check/fail this
regardless of IOPOLL.
>
>> + kiocb->private = &io->meta;
>> + if (req->opcode == IORING_OP_READ_META)
>> + ret = io_read(req, issue_flags);
>> + else
>> + ret = io_write(req, issue_flags);
>> +
>> + return ret;
>> +}
>
> kiocb->private is a bit of an odd beast, and ownership isn't clear at
> all. It would make the most sense if the owner of the kiocb (eg io_uring
> in this case) owned it, but take a look at eg ocfs2 and see what they do
> with it... I think this would blow up as a result.
Yes, ocfs2 makes use of kiocb->private, but that seems fine. In
io_uring we use the field only to send the information down; ocfs2 (or
anything else unaware of this interface) may simply overwrite
kiocb->private.
If a lower layer wants to support meta exchange, it is expected to
extract the meta-descriptor from kiocb->private before altering it.
The same already applies to the block direct path when we are doing polled io.