[PATCH v2 3/3] nvme-rdma: Handle number of queue changes

Chao Leng lengchao at huawei.com
Fri Aug 26 00:31:15 PDT 2022



On 2022/8/26 14:30, Daniel Wagner wrote:
> On Fri, Aug 26, 2022 at 09:10:04AM +0800, Chao Leng wrote:
>> On 2022/8/25 18:55, Daniel Wagner wrote:
>>> On Thu, Aug 25, 2022 at 06:08:10PM +0800, Chao Leng wrote:
>>>>> +	/*
>>>>> +	 * If the number of queues has increased (reconnect case)
>>>>> +	 * start all new queues now.
>>>>> +	 */
>>>>> +	ret = nvme_rdma_start_io_queues(ctrl, nr_queues,
>>>>> +					ctrl->tag_set.nr_hw_queues + 1);
>>>>> +	if (ret)
>>>>> +		goto out_cleanup_connect_q;
>>>>> +
>>>> Now the code looks weird.
>>>> Maybe we can do it like this:
>>>> first call blk_mq_update_nr_hw_queues(), and then nvme_rdma_start_io_queues().
>>>
>>> We have to start the existing queues before going into the 'if (!new)'
>>> part. That's why the queue start is split into two steps.
>> Indeed, it is not necessary.
>> The downside is small: some requests will fail and then be retried
>> or failed over. I think that is acceptable.
> 
> The first version made nvme_rdma_start_io_queues() re-entrant, hence
> we could just call nvme_rdma_start_io_queues() twice without the max
> queue logic here.
> 
> After seeing both versions I tend to say the first one keeps the
> 'weird' stuff closer together and doesn't make the call site of
> nvme_rdma_start_io_queues() do the counting. So my personal preference
I don't understand "do the counting".
Here is the code:
---
  drivers/nvme/host/rdma.c | 9 ++++-----
  1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 7d01fb770284..8dfb79726e13 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -980,10 +980,6 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
                         goto out_free_tag_set;
         }

-       ret = nvme_rdma_start_io_queues(ctrl);
-       if (ret)
-               goto out_cleanup_connect_q;
-
         if (!new) {
                 nvme_start_queues(&ctrl->ctrl);
                 if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
@@ -1000,13 +996,16 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
                 nvme_unfreeze(&ctrl->ctrl);
         }

+       ret = nvme_rdma_start_io_queues(ctrl);
+       if (ret)
+               goto out_wait_freeze_timed_out;
+
         return 0;

  out_wait_freeze_timed_out:
         nvme_stop_queues(&ctrl->ctrl);
         nvme_sync_io_queues(&ctrl->ctrl);
         nvme_rdma_stop_io_queues(ctrl);
-out_cleanup_connect_q:
         nvme_cancel_tagset(&ctrl->ctrl);
         if (new)
                 blk_cleanup_queue(ctrl->ctrl.connect_q);
-- 
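
For comparison, my understanding of the re-entrant variant Daniel
described for v1 is roughly the following. This is an untested sketch;
it assumes nvme_rdma_start_queue() sets NVME_RDMA_Q_LIVE on success,
as the current driver already does:

/*
 * Sketch of a re-entrant nvme_rdma_start_io_queues(): skip queues
 * that are already live, so the function can simply be called again
 * after blk_mq_update_nr_hw_queues() without the caller having to
 * pass a first/last queue range.
 */
static int nvme_rdma_start_io_queues(struct nvme_rdma_ctrl *ctrl)
{
	int i, ret = 0;

	for (i = 1; i < ctrl->ctrl.queue_count; i++) {
		/* Already connected on a previous pass, nothing to do. */
		if (test_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[i].flags))
			continue;
		ret = nvme_rdma_start_queue(ctrl, i);
		if (ret)
			goto out_stop_queues;
	}

	return 0;

out_stop_queues:
	for (i--; i >= 1; i--)
		nvme_rdma_stop_queue(&ctrl->queues[i]);
	return ret;
}

With that, both the fresh-setup and the reconnect path could call
nvme_rdma_start_io_queues() unconditionally, and only queues that are
not yet live would get connected.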
> is to go with v1.
> 
> Maybe there is another way but I haven't figured it out yet.
> .
> 


