[PATCH v2 3/3] nvme-rdma: Handle number of queue changes
Daniel Wagner
dwagner at suse.de
Thu Aug 25 23:30:24 PDT 2022
On Fri, Aug 26, 2022 at 09:10:04AM +0800, Chao Leng wrote:
> On 2022/8/25 18:55, Daniel Wagner wrote:
> > On Thu, Aug 25, 2022 at 06:08:10PM +0800, Chao Leng wrote:
> > > > +	/*
> > > > +	 * If the number of queues has increased (reconnect case)
> > > > +	 * start all new queues now.
> > > > +	 */
> > > > +	ret = nvme_rdma_start_io_queues(ctrl, nr_queues,
> > > > +					ctrl->tag_set.nr_hw_queues + 1);
> > > > +	if (ret)
> > > > +		goto out_cleanup_connect_q;
> > > > +
> > > Now the code looks weird.
> > > Maybe we can do it like this:
> > > first call blk_mq_update_nr_hw_queues, and then nvme_rdma_start_io_queues.
> >
> > We have to start the existing queues before going into the 'if (!new)'
> > part. That's why starting the queues is split into two steps.
> Indeed it is not necessary.
> It's just a little negative: some requests will fail and then retry
> or fail over. I think that is acceptable.
The first version made nvme_rdma_start_io_queues() re-entrant, hence we
could simply call nvme_rdma_start_io_queues() twice without the max
queue logic here.
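
Roughly, the re-entrant variant took only the ctrl argument and looked
something like this (just a sketch, not the literal v1 diff; it leans on
the NVME_RDMA_Q_LIVE flag which nvme_rdma_start_queue() already sets on
success):

static int nvme_rdma_start_io_queues(struct nvme_rdma_ctrl *ctrl)
{
	int i, ret = 0;

	for (i = 1; i < ctrl->ctrl.queue_count; i++) {
		/* Skip queues a previous call has already started. */
		if (test_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[i].flags))
			continue;

		ret = nvme_rdma_start_queue(ctrl, i);
		if (ret)
			goto out_stop_queues;
	}

	return 0;

out_stop_queues:
	for (i--; i >= 1; i--)
		nvme_rdma_stop_queue(&ctrl->queues[i]);
	return ret;
}

With that, the caller just invokes it once before the 'if (!new)' block
and once more after blk_mq_update_nr_hw_queues(); the second call only
picks up the freshly added queues.
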
After seeing both versions I tend to say the first one keeps the
'weird' stuff closer together and doesn't make the call site of
nvme_rdma_start_io_queues() do the counting. So my personal preference
is to go with v1.
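
For reference, this is roughly what the v2 call site in
nvme_rdma_configure_io_queues() boils down to (condensed, error handling
and the freeze handling trimmed; the nr_queues computation is my reading
of the patch, so treat it as a sketch rather than a verbatim quote):

	nr_queues = min(ctrl->tag_set.nr_hw_queues + 1, ctrl->ctrl.queue_count);

	/* Start only the queues the current tag set knows about. */
	ret = nvme_rdma_start_io_queues(ctrl, 1, nr_queues);
	if (ret)
		goto out_cleanup_connect_q;

	if (!new) {
		nvme_start_queues(&ctrl->ctrl);
		blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
					   ctrl->ctrl.queue_count - 1);
	}

	/*
	 * If the number of queues has increased (reconnect case)
	 * start all new queues now.
	 */
	ret = nvme_rdma_start_io_queues(ctrl, nr_queues,
					ctrl->tag_set.nr_hw_queues + 1);
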
Maybe there is another way, but I haven't figured it out yet.