[PATCH v2 7/8] nvme-rdma: fix timeout handler
Sagi Grimberg
sagi at grimberg.me
Tue Aug 18 20:38:32 EDT 2020
>> +static void nvme_rdma_complete_timed_out(struct request *rq)
>> +{
>> + struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
>> + struct nvme_rdma_queue *queue = req->queue;
>> + struct nvme_rdma_ctrl *ctrl = queue->ctrl;
>> +
>> + /* fence other contexts that may complete the command */
>> + mutex_lock(&ctrl->teardown_lock);
>> + nvme_rdma_stop_queue(queue);
>> + if (blk_mq_request_completed(rq))
>> + goto out;
>> + nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
>> + blk_mq_complete_request(rq);
>> +out:
>> + mutex_unlock(&ctrl->teardown_lock);
>> +}
>> +
>
> I believe there should be some comment explaining why it's ok to leave
> the rdma queue stopped.
> I think it's ok as:
> resetting: the controller will be reset, so the queue will be deleted
> connecting: init io failures will teardown partially initialized
> controller, so the queue will be deleted
I can add this comment.
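Something like the following, right above the nvme_rdma_stop_queue()
call (sketch only; exact wording to be settled in the next spin):

	/*
	 * The queue is deliberately left stopped:
	 * - RESETTING: the controller will be reset, so the queue
	 *   will be deleted
	 * - CONNECTING: init I/O failures will teardown the partially
	 *   initialized controller, so the queue will be deleted as
	 *   well
	 */
	nvme_rdma_stop_queue(queue);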
>
>> static enum blk_eh_timer_return
>> nvme_rdma_timeout(struct request *rq, bool reserved)
>> {
>> @@ -1961,29 +1979,43 @@ nvme_rdma_timeout(struct request *rq, bool reserved)
>> dev_warn(ctrl->ctrl.device, "I/O %d QID %d timeout\n",
>> rq->tag, nvme_rdma_queue_idx(queue));
>> - /*
>> - * Restart the timer if a controller reset is already scheduled. Any
>> - * timed out commands would be handled before entering the connecting
>> - * state.
>> - */
>> - if (ctrl->ctrl.state == NVME_CTRL_RESETTING)
>> + switch (ctrl->ctrl.state) {
>> + case NVME_CTRL_RESETTING:
>> + if (!nvme_rdma_queue_idx(queue)) {
>> + /*
>> + * if we are in teardown we must complete immediately
>> + * because we may block the teardown sequence (e.g.
>> + * nvme_disable_ctrl timed out).
>> + */
>> + nvme_rdma_complete_timed_out(rq);
>> + return BLK_EH_DONE;
>> + }
>> + /*
>> + * Restart the timer if a controller reset is already scheduled.
>> + * Any timed out commands would be handled before entering the
>> + * connecting state.
>> + */
>> return BLK_EH_RESET_TIMER;
>
> If you're in RESETTING, why do you need to qualify I/Os only on the
> admin queue? Can't all I/Os, regardless of queue, just be
> complete_timed_out()? Isn't this just a race between the I/O timeout
> and the resetting routine reaching the I/O?
You are correct; given that we are serialized against the reset/error
recovery, we can just do the same for both. The request is going to
be cancelled anyway.
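So the RESETTING arm can drop the queue-index check; roughly (a sketch
only, not the final patch):

	switch (ctrl->ctrl.state) {
	case NVME_CTRL_RESETTING:
		/*
		 * We are serialized against the error recovery work,
		 * which will cancel this request anyway, so complete
		 * it immediately regardless of the queue.
		 */
		nvme_rdma_complete_timed_out(rq);
		return BLK_EH_DONE;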
>
>
>> -
>> - if (ctrl->ctrl.state != NVME_CTRL_LIVE) {
>> + case NVME_CTRL_CONNECTING:
>> + if (reserved || !nvme_rdma_queue_idx(queue)) {
>> + /*
>> + * If we are connecting, we must complete connect (reserved)
>> + * or admin requests immediately, because they may block the
>> + * controller setup sequence.
>> + */
>> + nvme_rdma_complete_timed_out(rq);
>> + return BLK_EH_DONE;
>> + }
>
> This is reasonable. But I'm wondering why this too isn't just
> completing any I/O that timed out. For the non-controller create/init
> I/Os - they'll either bounce back to the multipath layer or requeue.
> With the requeue, there's an opportunity for Victor Gladkov's "reject
> I/O to offline device" to bounce it if it's been waiting a while.
You are right, I can do that for any state that is not LIVE.
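Roughly, the whole switch would then collapse to a single check
(a sketch; final wording may differ):

	if (ctrl->ctrl.state != NVME_CTRL_LIVE) {
		/*
		 * If we are resetting, connecting or deleting we
		 * should complete immediately because we may block
		 * the setup or teardown sequence; everything else
		 * will be cancelled by error recovery anyway.
		 */
		nvme_rdma_complete_timed_out(rq);
		return BLK_EH_DONE;
	}

	/* LIVE state: kick the normal error recovery */
	nvme_rdma_error_recovery(ctrl);
	return BLK_EH_RESET_TIMER;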
Thanks for the review!