[PATCH v2 7/8] nvme-rdma: fix timeout handler
James Smart
james.smart at broadcom.com
Fri Aug 14 19:27:46 EDT 2020
On 8/6/2020 12:11 PM, Sagi Grimberg wrote:
> Currently we check if the controller state != LIVE, and
> we directly fail the command under the assumption that this
> is the connect command or an admin command within the
> controller initialization sequence.
>
> This is wrong, we need to check if the request risks blocking
> controller setup/teardown if it is not completed, and only then
> fail it.
>
> The logic should be:
> - RESETTING, only fail fabrics/admin commands, otherwise controller
>   teardown will block; for any other command reset the timer and
>   come back again.
> - CONNECTING, if this is a connect (or an admin command), we fail
> right away (unblock controller initialization), otherwise we
> treat it like anything else.
> - otherwise trigger error recovery and reset the timer (the
> error handler will take care of completing/delaying it).
>
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
> ---
> drivers/nvme/host/rdma.c | 68 +++++++++++++++++++++++++++++-----------
> 1 file changed, 50 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index abc318737f35..30b401fcc06a 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -1185,6 +1185,7 @@ static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
> if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
> return;
>
> + dev_warn(ctrl->ctrl.device, "starting error recovery\n");
> queue_work(nvme_reset_wq, &ctrl->err_work);
> }
>
> @@ -1951,6 +1952,23 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
> return 0;
> }
>
> +static void nvme_rdma_complete_timed_out(struct request *rq)
> +{
> + struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
> + struct nvme_rdma_queue *queue = req->queue;
> + struct nvme_rdma_ctrl *ctrl = queue->ctrl;
> +
> + /* fence other contexts that may complete the command */
> + mutex_lock(&ctrl->teardown_lock);
> + nvme_rdma_stop_queue(queue);
> + if (blk_mq_request_completed(rq))
> + goto out;
> + nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
> + blk_mq_complete_request(rq);
> +out:
> + mutex_unlock(&ctrl->teardown_lock);
> +}
> +
I believe there should be some comment explaining why it's ok to leave
the rdma queue stopped.
I think it's ok because:
  resetting: the controller will be reset, so the queue will be deleted
  connecting: init io failures will tear down the partially initialized
  controller, so the queue will be deleted
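For illustration, the rationale could sit right where the queue is
stopped - something along these lines (wording sketch only, not part of
the posted patch):

	/* fence other contexts that may complete the command */
	mutex_lock(&ctrl->teardown_lock);
	/*
	 * The queue is intentionally left stopped: in RESETTING the
	 * reset path will delete and re-create it, and in CONNECTING a
	 * failed init I/O tears down the partially initialized
	 * controller, which deletes the queue as well.
	 */
	nvme_rdma_stop_queue(queue);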
> static enum blk_eh_timer_return
> nvme_rdma_timeout(struct request *rq, bool reserved)
> {
> @@ -1961,29 +1979,43 @@ nvme_rdma_timeout(struct request *rq, bool reserved)
> dev_warn(ctrl->ctrl.device, "I/O %d QID %d timeout\n",
> rq->tag, nvme_rdma_queue_idx(queue));
>
> - /*
> - * Restart the timer if a controller reset is already scheduled. Any
> - * timed out commands would be handled before entering the connecting
> - * state.
> - */
> - if (ctrl->ctrl.state == NVME_CTRL_RESETTING)
> + switch (ctrl->ctrl.state) {
> + case NVME_CTRL_RESETTING:
> + if (!nvme_rdma_queue_idx(queue)) {
> + /*
> + * if we are in teardown we must complete immediately
> + * because we may block the teardown sequence (e.g.
> + * nvme_disable_ctrl timed out).
> + */
> + nvme_rdma_complete_timed_out(rq);
> + return BLK_EH_DONE;
> + }
> + /*
> + * Restart the timer if a controller reset is already scheduled.
> + * Any timed out commands would be handled before entering the
> + * connecting state.
> + */
> return BLK_EH_RESET_TIMER;
If you're in RESETTING, why do you need to qualify this to ios on the
admin queue? Can't all ios, regardless of queue, just be
complete_timed_out()? Isn't this just a race between the io timeout and
the resetting routine reaching the io?
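In other words, something like this (rough sketch of the alternative,
untested):

	case NVME_CTRL_RESETTING:
		/*
		 * The reset path serializes with us via teardown_lock,
		 * so completing any timed out request here, admin or
		 * I/O, should be safe and avoids blocking the teardown.
		 */
		nvme_rdma_complete_timed_out(rq);
		return BLK_EH_DONE;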
> -
> - if (ctrl->ctrl.state != NVME_CTRL_LIVE) {
> + case NVME_CTRL_CONNECTING:
> + if (reserved || !nvme_rdma_queue_idx(queue)) {
> + /*
> + * if we are connecting we must complete immediately
> + * connect (reserved) or admin requests because we may
> + * block controller setup sequence.
> + */
> + nvme_rdma_complete_timed_out(rq);
> + return BLK_EH_DONE;
> + }
This is reasonable. But I'm wondering why this too isn't just
completing any io that timed out. For the non-controller create/init
ios - they'll either bounce back to the multipath layer or requeue.
With the requeue, there's an opportunity for Viktor Gladkov's "reject
I/O to offline device" to bounce it if it's been waiting a while.
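i.e. roughly (again just a sketch of what I'm suggesting, untested):

	case NVME_CTRL_CONNECTING:
		/*
		 * Complete anything that timed out while connecting;
		 * non connect/init I/O will bounce back to the
		 * multipath layer or requeue, where "reject I/O to
		 * offline device" can fail it if it has been waiting
		 * too long.
		 */
		nvme_rdma_complete_timed_out(rq);
		return BLK_EH_DONE;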
> + /* fallthru */
> + default:
> /*
> - * Teardown immediately if controller times out while starting
> - * or we are already started error recovery. all outstanding
> - * requests are completed on shutdown, so we return BLK_EH_DONE.
> + * every other state should trigger the error recovery
> + * which will be handled by the flow and controller state
> + * machine
> */
> - flush_work(&ctrl->err_work);
> - nvme_rdma_teardown_io_queues(ctrl, false);
> - nvme_rdma_teardown_admin_queue(ctrl, false);
> - return BLK_EH_DONE;
> + nvme_rdma_error_recovery(ctrl);
> }
>
> - dev_warn(ctrl->ctrl.device, "starting error recovery\n");
> - nvme_rdma_error_recovery(ctrl);
> -
> return BLK_EH_RESET_TIMER;
> }
>
-- james