[PATCH] block: re-introduce blk_mq_complete_request_sync

Chao Leng lengchao at huawei.com
Tue Oct 13 22:41:48 EDT 2020



On 2020/10/14 10:02, Ming Lei wrote:
> On Wed, Oct 14, 2020 at 09:37:07AM +0800, Chao Leng wrote:
>> rdma also needs this patch.
>> We have been testing this patch with nvme over roce for a few days, and it
>> works well.
>>
>> On 2020/10/14 9:08, Ming Lei wrote:
>>> On Tue, Oct 13, 2020 at 03:36:08PM -0700, Sagi Grimberg wrote:
>>>>
>>>>>>> This may just reduce the probability. Concurrent timeout and
>>>>>>> teardown can cause the same request to be handled twice, which is
>>>>>>> not what we expect.
>>>>>>
>>>>>> That is right. Unlike SCSI, NVMe doesn't complete requests
>>>>>> atomically, so a request may be completed/freed from both the timeout
>>>>>> path and nvme_cancel_request().
>>>>>>
>>>>>> .teardown_lock may still cover the race with Sagi's patch, because
>>>>>> teardown actually cancels requests synchronously.
>>>>> In extreme scenarios, the request may already have been retried
>>>>> successfully (rq state changed back to in-flight).
>>>>> Timeout processing may then wrongly stop the queue and abort the request.
>>>>> teardown_lock serializes timeout and teardown, but it does not
>>>>> avoid this race, so it might not be safe.
>>>>
>>>> Not sure I understand the scenario you are describing.
>>>>
>>>> what do you mean by "In extreme scenarios, the request may already have
>>>> been retried successfully (rq state changed back to in-flight)"?
>>>>
>>>> What will retry the request? The request will only be retried once the
>>>> host reconnects.
>>>>
>>>> We can call nvme_sync_queues in the last part of the teardown, but
>>>> I still don't understand the race here.
>>>
>>> Unlike SCSI, NVMe doesn't complete requests atomically, so a double
>>> completion/free can happen from both the timeout path and
>>> nvme_cancel_request() (via teardown).
>>>
>>> Since the request is completed remotely or asynchronously in both code
>>> paths, teardown_lock can't protect against this case.
>>>
>>> One solution is to apply the re-introduced blk_mq_complete_request_sync()
>>> in both code paths.
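
For illustration, applying it in nvme_cancel_request() might look roughly
like the sketch below (just the idea, not necessarily the exact upstream
body: complete the request synchronously so the tag iterator does not
return while a remote completion is still running):

bool nvme_cancel_request(struct request *req, void *data, bool reserved)
{
	dev_dbg_ratelimited(((struct nvme_ctrl *) data)->device,
				"Cancelling I/O %d", req->tag);

	/* don't touch a request the timeout path already completed */
	if (blk_mq_request_completed(req))
		return true;

	nvme_req(req)->status = NVME_SC_HOST_ABORTED_CMD;
	/* wait for the completion to finish before returning to the caller */
	blk_mq_complete_request_sync(req);
	return true;
}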
>>>
>>> Another candidate is to use nvme_sync_queues() before teardown, such as
>>> the following change:
>>>
>>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>>> index d6a3e1487354..dc3561ca0074 100644
>>> --- a/drivers/nvme/host/tcp.c
>>> +++ b/drivers/nvme/host/tcp.c
>>> @@ -1909,6 +1909,7 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
>>>    	blk_mq_quiesce_queue(ctrl->admin_q);
>>>    	nvme_start_freeze(ctrl);
>>>    	nvme_stop_queues(ctrl);
>>> +	nvme_sync_queues(ctrl);
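
For context, the reason syncing helps here is that blk_sync_queue() waits
for any timeout handling that is already in flight before teardown goes on
to cancel requests; in the v5.9-era block layer it is roughly:

void blk_sync_queue(struct request_queue *q)
{
	/* stop the timeout timer and wait for running timeout work */
	del_timer_sync(&q->timeout);
	cancel_work_sync(&q->timeout_work);
}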
>> nvme_sync_queues currently syncs both the io queues and the admin queue,
>> so we may need something like this:
>> void nvme_sync_io_queues(struct nvme_ctrl *ctrl)
>> {
>> 	struct nvme_ns *ns;
>>
>> 	down_read(&ctrl->namespaces_rwsem);
>> 	list_for_each_entry(ns, &ctrl->namespaces, list)
>> 		blk_sync_queue(ns->queue);
>> 	up_read(&ctrl->namespaces_rwsem);
>> }
> 
> That doesn't look necessary, because the admin queue is quiesced in
> nvme_tcp_teardown_io_queues(), so it is safe to sync the admin queue here
> too.
Syncing the admin queue is not necessary here; that should be done when the
admin queue itself is torn down. Also, rdma does not quiesce the admin queue
when tearing down the io queues. So a separate nvme_sync_io_queues() may be
a better choice.
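
If we do split it out, nvme_sync_queues() could simply become a wrapper, so
transports that only tear down the io queues can sync just those, while the
admin queue is synced in its own teardown path. A sketch, assuming the
helper above is exported:

void nvme_sync_queues(struct nvme_ctrl *ctrl)
{
	/* sync all namespace queues first */
	nvme_sync_io_queues(ctrl);

	/* then the admin queue, if it has been set up */
	if (ctrl->admin_q)
		blk_sync_queue(ctrl->admin_q);
}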
> 
> 
> Thanks,
> Ming
> 
> .
> 


