[PATCH] block: re-introduce blk_mq_complete_request_sync
Sagi Grimberg
sagi at grimberg.me
Tue Oct 13 18:31:18 EDT 2020
>>>>>> --
>>>>>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>>>>>> index 629b025685d1..46428ff0b0fc 100644
>>>>>> --- a/drivers/nvme/host/tcp.c
>>>>>> +++ b/drivers/nvme/host/tcp.c
>>>>>> @@ -2175,7 +2175,7 @@ static void nvme_tcp_complete_timed_out(struct request *rq)
>>>>>> /* fence other contexts that may complete the command */
>>>>>> mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
>>>>>> nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue));
>>>>>> - if (!blk_mq_request_completed(rq)) {
>>>>>> + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) {
>>>>>> nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
>>>>>> blk_mq_complete_request_sync(rq);
>>>>>> }
>> This may just reduce the probability. The concurrency of timeout and teardown can cause the same request
>> to be treated repeatedly, which is not what we expect.
>
> That is right. Unlike SCSI, NVMe doesn't apply atomic request completion, so a
> request may be completed/freed from both timeout & nvme_cancel_request().
>
> .teardown_lock may still cover the race with Sagi's patch, because teardown
> actually cancels requests in a synchronous style.
>
>> In the teardown process, deleting the timer and cancelling the timeout work after the queues are quiesced may be a better option.
>
> Seems a better solution, given it is aligned with NVMe PCI's reset
> handling. nvme_sync_queues() may be called in nvme_tcp_teardown_io_queues() to
> avoid this race.
We can't call nvme_sync_queues: that flushes the timeout work, which is
serializing with the teardown, so it will deadlock.
More information about the Linux-nvme mailing list