[PATCH] nvme-core: fix deadlock when reconnect fails due to nvme_set_queue_count timeout

Chao Leng lengchao at huawei.com
Wed Aug 5 02:33:35 EDT 2020


A deadlock happens when we test nvme over roce with link blink. The
reason: link blink causes error recovery, followed by a reconnect. If
the reconnect fails because nvme_set_queue_count times out, the
reconnect process sets the queue count to 0 and continues; then
nvme_start_ctrl calls nvme_enable_aen, and a deadlock happens because
the admin queue is quiesced.
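
For reference, this "continue anyway" behavior is deliberate:
nvme_set_queue_count() treats a positive NVMe status as a degraded
controller and keeps going with zero I/O queues so the admin queue
stays reachable. A simplified sketch of the function (from
drivers/nvme/host/core.c; exact code may differ between kernel
versions):

	int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count)
	{
		u32 q_count = (*count - 1) | ((*count - 1) << 16);
		u32 result;
		int status, nr_io_queues;

		status = nvme_set_features(ctrl, NVME_FEAT_NUM_QUEUES, q_count,
				NULL, 0, &result);
		if (status < 0)
			return status;

		/*
		 * A positive status is treated as a degraded controller:
		 * keep it online with 0 I/O queues so the admin queue is
		 * still reachable, rather than failing the reconnect.
		 */
		if (status > 0) {
			dev_err(ctrl->device,
				"Could not set queue count (%d)\n", status);
			*count = 0;
		} else {
			nr_io_queues = min(result & 0xffff, result >> 16) + 1;
			*count = min(*count, nr_io_queues);
		}

		return 0;
	}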

log:
Aug  3 22:47:24 localhost kernel: nvme nvme2: I/O 22 QID 0 timeout
Aug  3 22:47:24 localhost kernel: nvme nvme2: Could not set queue count (881)
stack:
root     23848  0.0  0.0      0     0 ?        D    Aug03   0:00
[kworker/u12:4+nvme-wq]
[<0>] blk_execute_rq+0x69/0xa0
[<0>] __nvme_submit_sync_cmd+0xaf/0x1b0 [nvme_core]
[<0>] nvme_features+0x73/0xb0 [nvme_core]
[<0>] nvme_start_ctrl+0xa4/0x100 [nvme_core]
[<0>] nvme_rdma_setup_ctrl+0x438/0x700 [nvme_rdma]
[<0>] nvme_rdma_reconnect_ctrl_work+0x22/0x30 [nvme_rdma]
[<0>] process_one_work+0x1a7/0x370
[<0>] worker_thread+0x30/0x380
[<0>] kthread+0x112/0x130
[<0>] ret_from_fork+0x35/0x40

Many callers of __nvme_submit_sync_cmd treat the error code in two
modes: an error code less than 0 means the command itself failed; an
error code greater than 0 means the target does not support the request
or similar. So when cancelling I/O with the status codes
NVME_SC_HOST_PATH_ERROR or NVME_SC_HOST_ABORTED_CMD, we need to set the
NVME_REQ_CANCELLED flag. __nvme_submit_sync_cmd will then return an
error (less than 0), nvme_set_queue_count will return an error, and the
reconnect will fail instead of continuing.
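
The flag works because of how __nvme_submit_sync_cmd() derives its
return value after the request completes; roughly (simplified, assuming
the behavior of this kernel version):

	/* tail of __nvme_submit_sync_cmd() */
	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
		ret = -EINTR;			/* < 0: command itself failed */
	else
		ret = nvme_req(req)->status;	/* >= 0: NVMe status from target */

With NVME_REQ_CANCELLED set on the cancelled request, the timed-out Set
Features command surfaces as -EINTR rather than as a positive status, so
nvme_set_queue_count() propagates the error instead of degrading to zero
queues.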

Signed-off-by: Chao Leng <lengchao at huawei.com>
---
 drivers/nvme/host/core.c    | 1 +
 drivers/nvme/host/fabrics.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c2c5bc4fb702..865645577f2c 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -303,6 +303,7 @@ bool nvme_cancel_request(struct request *req, void *data, bool reserved)
 	if (blk_mq_request_completed(req))
 		return true;
 
+	nvme_req(req)->flags |= NVME_REQ_CANCELLED;
 	nvme_req(req)->status = NVME_SC_HOST_ABORTED_CMD;
 	blk_mq_force_complete_rq(req);
 	return true;
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 2a6c8190eeb7..4e745603a3af 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -552,6 +552,7 @@ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
 	    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
 		return BLK_STS_RESOURCE;
 
+	nvme_req(rq)->flags |= NVME_REQ_CANCELLED;
 	nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
 	blk_mq_start_request(rq);
 	nvme_complete_rq(rq);
-- 
2.16.4



