[PATCH V2 3/4] nvme: tcp: complete non-IO requests atomically
Chao Leng
lengchao at huawei.com
Tue Oct 20 23:14:37 EDT 2020
On 2020/10/21 10:55, Ming Lei wrote:
> On Wed, Oct 21, 2020 at 10:20:11AM +0800, Chao Leng wrote:
>>
>>
>> On 2020/10/21 9:22, Ming Lei wrote:
>>> On Tue, Oct 20, 2020 at 05:04:29PM +0800, Chao Leng wrote:
>>>>
>>>>
>>>> On 2020/10/20 16:53, Ming Lei wrote:
>>>>> During the controller's CONNECTING state, admin/fabric/connect requests
>>>>> are submitted to recover the controller, and we allow these requests to be
>>>>> aborted directly in the timeout handler so they do not block the setup procedure.
>>>>>
>>>>> So a timeout vs. normal completion race exists on these requests, since the
>>>>> admin/fabric/connect queues won't be shut down before handling a timeout
>>>>> during the CONNECTING state.
>>>>>
>>>>> Add atomic completion for requests from the connect/fabric/admin queues to
>>>>> avoid the race.
>>>>>
>>>>> CC: Chao Leng <lengchao at huawei.com>
>>>>> Cc: Sagi Grimberg <sagi at grimberg.me>
>>>>> Reported-by: Yi Zhang <yi.zhang at redhat.com>
>>>>> Tested-by: Yi Zhang <yi.zhang at redhat.com>
>>>>> Signed-off-by: Ming Lei <ming.lei at redhat.com>
>>>>> ---
>>>>> drivers/nvme/host/tcp.c | 40 +++++++++++++++++++++++++++++++++++++---
>>>>> 1 file changed, 37 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>>>>> index d6a3e1487354..7e85bd4a8d1b 100644
>>>>> --- a/drivers/nvme/host/tcp.c
>>>>> +++ b/drivers/nvme/host/tcp.c
>>>>> @@ -30,6 +30,8 @@ static int so_priority;
>>>>> module_param(so_priority, int, 0644);
>>>>> MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority");
>>>>> +#define REQ_STATE_COMPLETE 0
>>>>> +
>>>>> enum nvme_tcp_send_state {
>>>>> NVME_TCP_SEND_CMD_PDU = 0,
>>>>> NVME_TCP_SEND_H2C_PDU,
>>>>> @@ -56,6 +58,8 @@ struct nvme_tcp_request {
>>>>> size_t offset;
>>>>> size_t data_sent;
>>>>> enum nvme_tcp_send_state state;
>>>>> +
>>>>> + unsigned long comp_state;
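The rest of the hunk is trimmed in the quote above. For context, a per-request
flag like comp_state is typically used as a one-shot test-and-set gate, so that
only one of the two racing paths (normal completion vs. timeout) actually
completes the request. A rough sketch of that pattern, with a hypothetical
helper name, not the exact code from the patch:

	static bool nvme_tcp_try_complete_atomic(struct request *rq)
	{
		struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);

		/* whichever path sets the bit first owns the completion */
		if (test_and_set_bit(REQ_STATE_COMPLETE, &req->comp_state))
			return false;	/* already completed by the other path */

		/* ... proceed with the driver's normal completion of rq ... */
		return true;
	}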
>>>> I do not think adding another state is a good idea.
>>>> It duplicates what rq->state already tracks.
>>>> In the teardown process, deleting the timer and canceling the timeout
>>>> work after the queues are quiesced may be a better option.
>>>> I will send the patch later.
>>>> The patch has already been tested with RoCE for more than one week.
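For reference, the direction described above roughly corresponds to quiescing
the queues and then syncing them, which deletes the timeout timer and flushes
the timeout work so the timeout handler can no longer race with teardown. A
sketch under that assumption (hypothetical function name, not the patch that
was later posted):

	static void nvme_tcp_teardown_io_queues_sketch(struct nvme_ctrl *ctrl)
	{
		nvme_stop_queues(ctrl);	/* quiesce all namespace queues */
		nvme_sync_queues(ctrl);	/* del_timer_sync() + cancel_work_sync() per queue */
		/* ... then cancel/abort the outstanding requests as before ... */
	}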
>>>
>>> Actually there isn't a race between timeout and teardown, and patch 1 and
>>> patch 2 are enough to fix the issue reported by Yi.
>>>
>>> It is just that rq->state is updated to IDLE in its ->complete(), so
>>> either code path may think that this rq hasn't been completed, and
>>> patch 2 has addressed this issue.
>>>
>>> In short, the teardown lock is enough to cover the race.
>> The race may cause several kinds of abnormal behavior:
>> 1. The issue reported by Yi Zhang <yi.zhang at redhat.com>
>> detail: https://lore.kernel.org/linux-nvme/1934331639.3314730.1602152202454.JavaMail.zimbra@redhat.com/
>> 2. BUG_ON in blk_mq_requeue_request
>> This happens because error recovery and the timeout handler may both complete
>> the request. First, error recovery cancels the request in the teardown
>> process; the request is then retried in completion and rq->state is changed
>> to IDLE.
>
> Right.
>
>> Then the timeout handler completes the request again and likewise retries
>> it, and the BUG_ON in blk_mq_requeue_request is hit.
>
> With patch2 in this patchset, timeout handler won't complete the request any
> more.
>
>> 3. Abnormal link disconnection
>> First, error recovery cancels all requests, the reconnect succeeds, and the
>> requests are restarted. Then the timeout handler completes a request again
>> and the queue is stopped in nvme_rdma(tcp)_complete_timed_out, so an
>> abnormal link disconnection happens. This requires the timeout handling to
>> be delayed for a long time for some reason, such as a hardware interrupt,
>> so the probability is low.
>
> OK, the timeout handler may only get a chance to run after recovery is
> done, and that can be fixed by calling nvme_sync_queues() after updating to
> CONNECTING or before updating to LIVE, together with patches 1 & 2.
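A rough illustration of where such a call could sit in the error recovery /
reconnect work; the exact placement is an assumption, not the final patch:

	/* after moving the controller to CONNECTING ... */
	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
		/* state change failed, e.g. the controller is being deleted */
		return;
	}
	/* wait until any pending timeout timer/work has finished */
	nvme_sync_queues(&ctrl->ctrl);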
>
>> teardown_lock just serializes the race, and checking rq->state can avoid
>> scenarios 1 and 2, but scenario 3 cannot be fixed that way.
>
> I didn't think of scenario 3, which does not seem to be triggered in Yi's test.
Scenario 3 is unlikely to be triggered in normal tests.
The trigger conditions are harsh; it will only happen in some extreme situations.
Without scenario 3, Sagi's patch works well.
>
>
> thanks,
> Ming
>
> .
>