[PATCH for-next v3 4/4] nvme: wire up async polling for io passthrough commands
Ming Lei
ming.lei at redhat.com
Tue Aug 8 18:15:57 PDT 2023
Hi Kanchan,
On Tue, Aug 23, 2022 at 09:44:43PM +0530, Kanchan Joshi wrote:
> Store a cookie during submission, and use that to implement
> completion-polling inside the ->uring_cmd_iopoll handler.
> This handler makes use of existing bio poll facility.
>
> Signed-off-by: Kanchan Joshi <joshi.k at samsung.com>
> Signed-off-by: Anuj Gupta <anuj20.g at samsung.com>
> ---
...
>
> +int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd)
> +{
> + struct bio *bio;
> + int ret = 0;
> + struct nvme_ns *ns;
> + struct request_queue *q;
> +
> + rcu_read_lock();
> + bio = READ_ONCE(ioucmd->cookie);
> + ns = container_of(file_inode(ioucmd->file)->i_cdev,
> + struct nvme_ns, cdev);
> + q = ns->queue;
> + if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags) && bio && bio->bi_bdev)
> + ret = bio_poll(bio, NULL, 0);
> + rcu_read_unlock();
> + return ret;
> +}
It doesn't look good to call bio_poll() while holding the RCU read lock,
since set_page_dirty_lock() may sleep in the end_io code path:
blk_rq_unmap_user
  bio_release_pages
    __bio_release_pages
      set_page_dirty_lock
        lock_page
You probably need to move the page dirtying into workqueue context, e.g. via
bio_check_pages_dirty(), but then I guess passthrough io poll performance may
drop. Maybe we need to investigate how to remove the RCU read lock here.
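A rough, untested sketch of the deferred dirtying I have in mind, borrowing
the pattern the direct-IO path already uses (bio_set_pages_dirty() at
submission, bio_check_pages_dirty() at completion); the helper names
nvme_uring_map_pages()/nvme_uring_unmap_pages() below are made up just for
illustration:

#include <linux/bio.h>

/* at blk_rq_map_user() time, still in the submitting task context */
static void nvme_uring_map_pages(struct bio *bio, bool is_read)
{
	if (is_read)			/* data coming from the device */
		bio_set_pages_dirty(bio);
}

/* at completion, possibly under rcu_read_lock() from the poll loop */
static void nvme_uring_unmap_pages(struct bio *bio, bool is_read)
{
	if (is_read)
		/* re-dirties any cleaned pages from a workqueue and drops
		 * the page references, so nothing here can sleep */
		bio_check_pages_dirty(bio);
	else
		bio_release_pages(bio, false);
}

That at least avoids the sleep under RCU, same as what the block direct-IO
code does; whether the extra workqueue bounce hurts the passthrough poll
numbers is the part that would need measuring.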
> #ifdef CONFIG_NVME_MULTIPATH
> static int nvme_ns_head_ctrl_ioctl(struct nvme_ns *ns, unsigned int cmd,
> void __user *argp, struct nvme_ns_head *head, int srcu_idx)
> @@ -685,6 +721,29 @@ int nvme_ns_head_chr_uring_cmd(struct io_uring_cmd *ioucmd,
> srcu_read_unlock(&head->srcu, srcu_idx);
> return ret;
> }
> +
> +int nvme_ns_head_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd)
> +{
> + struct cdev *cdev = file_inode(ioucmd->file)->i_cdev;
> + struct nvme_ns_head *head = container_of(cdev, struct nvme_ns_head, cdev);
> + int srcu_idx = srcu_read_lock(&head->srcu);
> + struct nvme_ns *ns = nvme_find_path(head);
> + struct bio *bio;
> + int ret = 0;
> + struct request_queue *q;
> +
> + if (ns) {
> + rcu_read_lock();
> + bio = READ_ONCE(ioucmd->cookie);
> + q = ns->queue;
> + if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags) && bio
> + && bio->bi_bdev)
> + ret = bio_poll(bio, NULL, 0);
> + rcu_read_unlock();
Same issue as above.
thanks,
Ming