[PATCH v2 3/3] nvme-rdma: Handle number of queue changes
Chao Leng
lengchao at huawei.com
Thu Aug 25 03:08:10 PDT 2022
On 2022/8/23 15:44, Daniel Wagner wrote:
> On reconnect, the number of queues might have changed.
>
> In the case where we have more queues available than previously, we
> try to access queues which are not initialized yet.
>
> In the other case, where we have fewer queues than previously, the
> connection attempt will fail because the target doesn't support the
> old number of queues, and we end up in a reconnect loop.
>
> Thus, only start the queues which are currently present in the
> tagset, limited by the number of available queues. After updating
> the tagset we can start any newly added queues.
>
> Signed-off-by: Daniel Wagner <dwagner at suse.de>
> ---
> drivers/nvme/host/rdma.c | 26 +++++++++++++++++++++-----
> 1 file changed, 21 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index 3100643be299..386674d7c0e6 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -696,11 +696,12 @@ static int nvme_rdma_start_queue(struct nvme_rdma_ctrl *ctrl, int idx)
> return ret;
> }
>
> -static int nvme_rdma_start_io_queues(struct nvme_rdma_ctrl *ctrl)
> +static int nvme_rdma_start_io_queues(struct nvme_rdma_ctrl *ctrl,
> + int first, int last)
> {
> int i, ret = 0;
>
> - for (i = 1; i < ctrl->ctrl.queue_count; i++) {
> + for (i = first; i < last; i++) {
> ret = nvme_rdma_start_queue(ctrl, i);
> if (ret)
> goto out_stop_queues;
> @@ -709,7 +710,7 @@ static int nvme_rdma_start_io_queues(struct nvme_rdma_ctrl *ctrl)
> return 0;
>
> out_stop_queues:
> - for (i--; i >= 1; i--)
> + for (i--; i >= first; i--)
> nvme_rdma_stop_queue(&ctrl->queues[i]);
> return ret;
> }
> @@ -964,7 +965,7 @@ static void nvme_rdma_destroy_io_queues(struct nvme_rdma_ctrl *ctrl,
>
> static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
> {
> - int ret;
> + int ret, nr_queues;
>
> ret = nvme_rdma_alloc_io_queues(ctrl);
> if (ret)
> @@ -980,7 +981,13 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
> goto out_free_tag_set;
> }
>
> - ret = nvme_rdma_start_io_queues(ctrl);
> +	/*
> +	 * Only start IO queues for which we have allocated the tagset
> +	 * and limited it to the available queues. On reconnects, the
> +	 * queue number might have changed.
> +	 */
> +	nr_queues = min(ctrl->tag_set.nr_hw_queues + 1, ctrl->ctrl.queue_count);
> + ret = nvme_rdma_start_io_queues(ctrl, 1, nr_queues);
> if (ret)
> goto out_cleanup_connect_q;
>
> @@ -1000,6 +1007,15 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
> nvme_unfreeze(&ctrl->ctrl);
> }
>
> + /*
> + * If the number of queues has increased (reconnect case)
> + * start all new queues now.
> + */
> + ret = nvme_rdma_start_io_queues(ctrl, nr_queues,
> + ctrl->tag_set.nr_hw_queues + 1);
> + if (ret)
> + goto out_cleanup_connect_q;
> +
Now the code looks a bit awkward, with the queues started in two steps.
Maybe we can restructure it: call blk_mq_update_nr_hw_queues() first,
and then call nvme_rdma_start_io_queues() once for all queues.
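For illustration, the index arithmetic of the patch's two-phase start can be
modeled outside the kernel. This is only a sketch with made-up names (the
`started[]` array, `configure_io_queues()` signature, etc. are hypothetical);
only the min()/queue_count logic mirrors the patch. Queue 0 is the admin
queue, so a tagset with N hw queues covers I/O queues 1..N.

```c
#include <assert.h>

#define MAX_QUEUES 32

static int started[MAX_QUEUES];

static void reset_queues(void)
{
	for (int i = 0; i < MAX_QUEUES; i++)
		started[i] = 0;
}

static int count_started(void)
{
	int n = 0;

	for (int i = 0; i < MAX_QUEUES; i++)
		n += started[i];
	return n;
}

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

/* Start I/O queues [first, last); stands in for nvme_rdma_start_io_queues(). */
static int start_io_queues(int first, int last)
{
	for (int i = first; i < last; i++)
		started[i] = 1;
	return 0;
}

/*
 * Models the reconnect path: the tagset still has the old number of
 * hw queues, while queue_count reflects what the target granted now.
 */
static int configure_io_queues(int old_hw_queues, int queue_count)
{
	/*
	 * Phase 1: only start queues covered by both the old tagset
	 * and the newly granted queue_count.
	 */
	int nr_queues = min_int(old_hw_queues + 1, queue_count);
	int ret = start_io_queues(1, nr_queues);

	if (ret)
		return ret;

	/* blk_mq_update_nr_hw_queues() would resize the tagset here. */
	int new_hw_queues = queue_count - 1;

	/* Phase 2: start any queues the tagset update just added. */
	return start_io_queues(nr_queues, new_hw_queues + 1);
}
```

In the grow case phase 2 starts the extra queues; in the shrink case the
second range is empty and phase 2 is a no-op, which is what makes the
two-step structure look redundant there.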
> return 0;
>
> out_wait_freeze_timed_out:
>