[PATCH] block: re-introduce blk_mq_complete_request_sync

Ming Lei ming.lei at redhat.com
Wed Oct 14 05:56:42 EDT 2020


On Wed, Oct 14, 2020 at 05:39:12PM +0800, Chao Leng wrote:
> 
> 
> On 2020/10/14 11:34, Ming Lei wrote:
> > On Wed, Oct 14, 2020 at 09:08:28AM +0800, Ming Lei wrote:
> > > On Tue, Oct 13, 2020 at 03:36:08PM -0700, Sagi Grimberg wrote:
> > > > 
> > > > > > > This may just reduce the probability. The concurrency of timeout
> > > > > > > and teardown can still cause the same request to
> > > > > > > be handled repeatedly, which is not what we expect.
> > > > > > 
> > > > > > That is right. Unlike SCSI, NVMe doesn't apply atomic request
> > > > > > completion, so a request may be completed/freed from both the
> > > > > > timeout path and nvme_cancel_request().
> > > > > > 
> > > > > > .teardown_lock may still cover the race with Sagi's patch, because
> > > > > > teardown actually cancels requests synchronously.
> > > > > In extreme scenarios, the request may already have been retried
> > > > > successfully (rq state changed back to in-flight).
> > > > > Timeout processing may then wrongly stop the queue and abort the request.
> > > > > teardown_lock serializes the timeout and teardown paths, but does not
> > > > > avoid this race.
> > > > > It might not be safe.
> > > > 
> > > > Not sure I understand the scenario you are describing.
> > > > 
> > > > what do you mean by "the request may already have been retried
> > > > successfully (rq state changed back to in-flight)"?
> > > > 
> > > > What will retry the request? The request will be retried only when
> > > > the host reconnects.
> > > > 
> > > > We can call nvme_sync_queues in the last part of the teardown, but
> > > > I still don't understand the race here.
> > > 
> > > Unlike SCSI, NVMe doesn't complete requests atomically, so a double
> > > completion/free can happen from both timeout & nvme_cancel_request() (via teardown).
> > > 
> > > Given that the request is completed remotely or asynchronously in the two
> > > code paths, the teardown_lock can't protect against this case.
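
(For illustration of what "atomic request completion" buys here: SCSI closes
this window by atomically claiming a command before completing it, so only one
context can win. A minimal sketch of that claim-before-complete pattern; the
flag name and helper below are made up for this sketch, not the real SCSI or
blk-mq internals:)

#include <linux/bitops.h>
#include <linux/blk-mq.h>

/* Hypothetical per-request flag for this sketch only. */
#define EXAMPLE_STATE_COMPLETE	0

static void example_complete_once(struct request *rq, unsigned long *state)
{
	/*
	 * Whichever context (normal completion, timeout, or cancel) wins
	 * this atomic test-and-set completes the request exactly once;
	 * the loser backs off instead of double-completing/freeing it.
	 */
	if (test_and_set_bit(EXAMPLE_STATE_COMPLETE, state))
		return;

	blk_mq_complete_request(rq);
}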
> > 
> > Thinking about the issue further, the race shouldn't be between timeout and
> > teardown.
> > 
> > Both nvme_cancel_request() and nvme_tcp_complete_timed_out() are called
> > with .teardown_lock held, and both check whether the request is already
> > completed before calling blk_mq_complete_request(), which moves the request
> > to the COMPLETE state. So the request shouldn't be double-freed by the two
> > code paths.
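
(The guard both paths rely on looks roughly like the sketch below; this
paraphrases the timeout side of Sagi's 5.9-era nvme-tcp patch from memory, so
treat the exact names and placement as approximate rather than authoritative:)

static void nvme_tcp_complete_timed_out(struct request *rq)
{
	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
	struct nvme_ctrl *ctrl = &req->queue->ctrl->ctrl;

	/* fence other contexts that may complete this command */
	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
	nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue));
	if (!blk_mq_request_completed(rq)) {
		/* we won the race: fail the command exactly once */
		nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
		blk_mq_complete_request(rq);
	}
	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
}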
> > 
> > Another possible cause is a race between timeout and normal completion
> > (pending requests are failed fast after the ctrl state is updated to CONNECTING).
> > 
> > Yi, can you try the following patch and see if the issue is fixed?
> > 
> > diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> > index d6a3e1487354..fab9220196bd 100644
> > --- a/drivers/nvme/host/tcp.c
> > +++ b/drivers/nvme/host/tcp.c
> > @@ -1886,7 +1886,6 @@ static int nvme_tcp_configure_admin_queue(struct nvme_ctrl *ctrl, bool new)
> >   static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
> >   		bool remove)
> >   {
> > -	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
> >   	blk_mq_quiesce_queue(ctrl->admin_q);
> >   	nvme_tcp_stop_queue(ctrl, 0);
> >   	if (ctrl->admin_tagset) {
> > @@ -1897,15 +1896,13 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
> >   	if (remove)
> >   		blk_mq_unquiesce_queue(ctrl->admin_q);
> >   	nvme_tcp_destroy_admin_queue(ctrl, remove);
> > -	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
> >   }
> >   static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
> >   		bool remove)
> >   {
> > -	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
> >   	if (ctrl->queue_count <= 1)
> > -		goto out;
> > +		return;
> >   	blk_mq_quiesce_queue(ctrl->admin_q);
> >   	nvme_start_freeze(ctrl);
> >   	nvme_stop_queues(ctrl);
> > @@ -1918,8 +1915,6 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
> >   	if (remove)
> >   		nvme_start_queues(ctrl);
> >   	nvme_tcp_destroy_io_queues(ctrl, remove);
> > -out:
> > -	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
> >   }
> >   static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl)
> > @@ -2030,11 +2025,11 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
> >   	struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
> >   	nvme_stop_keep_alive(ctrl);
> > +
> > +	mutex_lock(&tcp_ctrl->teardown_lock);
> >   	nvme_tcp_teardown_io_queues(ctrl, false);
> > -	/* unquiesce to fail fast pending requests */
> > -	nvme_start_queues(ctrl);
> >   	nvme_tcp_teardown_admin_queue(ctrl, false);
> > -	blk_mq_unquiesce_queue(ctrl->admin_q);
> Deleting blk_mq_unquiesce_queue will introduce a bug that may cause reconnect
> to fail. Deleting nvme_start_queues may cause another bug.

nvme_tcp_setup_ctrl() will restart the io and admin queues, and only .connect_q
and .fabrics_q are required during reconnect.
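
(A rough outline of why reconnect can recover the queues; this condenses the
5.9-era setup path from memory, and reconnect_outline() is a made-up wrapper
for illustration, not a real function:)

static int reconnect_outline(struct nvme_ctrl *ctrl)
{
	int ret;

	/* unquiesces ctrl->admin_q again as part of admin queue setup */
	ret = nvme_tcp_configure_admin_queue(ctrl, false);
	if (ret)
		return ret;

	if (ctrl->queue_count > 1) {
		/* restarts the io queues (nvme_start_queues()) internally */
		ret = nvme_tcp_configure_io_queues(ctrl, false);
		if (ret)
			return ret;
	}
	return 0;
}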

So can you explain the bug in detail?

Thanks,
Ming
