Hang at NVME Host caused by Controller reset
Sagi Grimberg
sagi at grimberg.me
Wed Jul 29 05:28:20 EDT 2020
>>>> This time, with "nvme-fabrics: allow to queue requests for live queues"
>>>> patch applied, I see hang only at blk_queue_enter():
>>>
>>> Interesting, does the reset loop hang, or is it able to make forward
>>> progress?
>>
>> Looks like the freeze depth is messed up with the timeout handler.
>> We shouldn't call nvme_tcp_teardown_io_queues in the timeout handler
>> because it messes with the freeze depth, causing the unfreeze to not
>> wake the waiter (blk_queue_enter). We should simply stop the queue
>> and complete the I/O, and the condition was wrong too, because we
>> need to do it only for the connect command (which cannot reset the
>> timer). So we should check for reserved in the timeout handler.
>>
>> Can you please try this patch?
> Even with this patch I see hangs, as shown below:
While it's omitted from the log you provided, it's possible
that we just reset the timer for timed-out admin commands, which
leaves the error recovery flow stuck.
Can you please try this:
--
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 62fbaecdc960..290804d2944f 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -464,6 +464,7 @@ static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
 		return;
 
+	dev_warn(ctrl->device, "starting error recovery\n");
 	queue_work(nvme_reset_wq, &to_tcp_ctrl(ctrl)->err_work);
 }
 
@@ -2156,33 +2157,41 @@ nvme_tcp_timeout(struct request *rq, bool reserved)
 	struct nvme_tcp_ctrl *ctrl = req->queue->ctrl;
 	struct nvme_tcp_cmd_pdu *pdu = req->pdu;
 
-	/*
-	 * Restart the timer if a controller reset is already scheduled. Any
-	 * timed out commands would be handled before entering the connecting
-	 * state.
-	 */
-	if (ctrl->ctrl.state == NVME_CTRL_RESETTING)
-		return BLK_EH_RESET_TIMER;
-
 	dev_warn(ctrl->ctrl.device,
 		"queue %d: timeout request %#x type %d\n",
 		nvme_tcp_queue_id(req->queue), rq->tag, pdu->hdr.type);
 
-	if (ctrl->ctrl.state != NVME_CTRL_LIVE) {
+	switch (ctrl->ctrl.state) {
+	case NVME_CTRL_RESETTING:
+		/*
+		 * Restart the timer if a controller reset is already scheduled.
+		 * Any timed out commands would be handled before entering the
+		 * connecting state.
+		 */
+		return BLK_EH_RESET_TIMER;
+	case NVME_CTRL_CONNECTING:
+		if (reserved || !nvme_tcp_queue_id(req->queue)) {
+			/*
+			 * stop queue immediately and complete the request
+			 * if this is a connect sequence because these
+			 * requests cannot reset the timer when timed out.
+			 */
+			nvme_tcp_stop_queue(&ctrl->ctrl, nvme_tcp_queue_id(req->queue));
+			nvme_req(rq)->flags |= NVME_REQ_CANCELLED;
+			nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
+			blk_mq_complete_request(rq);
+			return BLK_EH_DONE;
+		}
+		/* fallthru */
+	default:
 		/*
-		 * Teardown immediately if controller times out while starting
-		 * or we are already started error recovery. all outstanding
-		 * requests are completed on shutdown, so we return BLK_EH_DONE.
+		 * every other state should trigger the error recovery
+		 * which will be handled by the flow and controller state
+		 * machine
 		 */
-		flush_work(&ctrl->err_work);
-		nvme_tcp_teardown_io_queues(&ctrl->ctrl, false);
-		nvme_tcp_teardown_admin_queue(&ctrl->ctrl, false);
-		return BLK_EH_DONE;
+		nvme_tcp_error_recovery(&ctrl->ctrl);
 	}
 
-	dev_warn(ctrl->ctrl.device, "starting error recovery\n");
-	nvme_tcp_error_recovery(&ctrl->ctrl);
-
 	return BLK_EH_RESET_TIMER;
 }
 
--
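
To spell out what the switch above is meant to do, here is a standalone toy
sketch (illustration only; the enum and function names are made up, and this
is not the kernel code): RESETTING only rearms the timer because the scheduled
error recovery will reap the command, CONNECTING completes connect/admin-queue
commands in place because rearming cannot rescue them, and every other state
triggers error recovery and rearms.
--
/* toy_timeout_decision.c - illustration only, not drivers/nvme/host/tcp.c */
#include <stdbool.h>
#include <stdio.h>

/* made-up stand-ins for the controller states and blk-mq timeout returns */
enum ctrl_state { CTRL_LIVE, CTRL_RESETTING, CTRL_CONNECTING };
enum eh_ret { EH_RESET_TIMER, EH_DONE };

static enum eh_ret timeout_decision(enum ctrl_state state, bool reserved,
				    int queue_id)
{
	switch (state) {
	case CTRL_RESETTING:
		/* error recovery is already scheduled; it will reap the command */
		return EH_RESET_TIMER;
	case CTRL_CONNECTING:
		/* connect/admin-queue commands cannot be rescued by rearming */
		if (reserved || queue_id == 0)
			return EH_DONE;	/* stop the queue, complete in place */
		/* fallthru */
	default:
		/* any other state: trigger error recovery, then rearm */
		return EH_RESET_TIMER;
	}
}

int main(void)
{
	printf("connecting + connect cmd: %s\n",
	       timeout_decision(CTRL_CONNECTING, true, 1) == EH_DONE ?
	       "completed in place" : "timer rearmed");
	printf("live + regular I/O:       %s\n",
	       timeout_decision(CTRL_LIVE, false, 3) == EH_DONE ?
	       "completed in place" : "timer rearmed");
	return 0;
}
--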