[PATCH 5/5] nvme: enable batched completions of passthrough IO
Jens Axboe
axboe at kernel.dk
Mon Sep 26 18:44:20 PDT 2022
Now that the normal passthrough end_io path doesn't need the request
anymore, we can kill the explicit blk_mq_free_request() and just return
RQ_END_IO_FREE instead. This enables the batched completion path to
free requests in batches.
This brings passthrough IO performance at least on par with bdev based
O_DIRECT with io_uring. With this and batched allocations, peak
performance goes from 110M IOPS to 122M IOPS. For IRQ based completions,
passthrough is now also about 10% faster than before, going from ~61M
to ~67M IOPS.
Co-developed-by: Stefan Roesch <shr at fb.com>
Signed-off-by: Jens Axboe <axboe at kernel.dk>
---
drivers/nvme/host/ioctl.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 9e356a6c96c2..d9633f426690 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -423,8 +423,7 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
 	else
 		io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_cb);
 
-	blk_mq_free_request(req);
-	return RQ_END_IO_NONE;
+	return RQ_END_IO_FREE;
 }
 
 static enum rq_end_io_ret nvme_uring_cmd_end_io_meta(struct request *req,
--
2.35.1