[PATCH] nvme: fix reconnection fail due to reserved tag allocation

Sagi Grimberg sagi at grimberg.me
Thu Mar 7 02:35:11 PST 2024



On 07/03/2024 12:32, 许春光 wrote:
> Thanks for the review. It seems we should revert patch
> ed01fee283a0; it appears to be just a standalone 'optimization'. If there
> is no doubt, I will send another patch.

Not a revert, but a fix with a Fixes tag. Just use 
NVMF_ADMIN_RESERVED_TAGS and NVMF_IO_RESERVED_TAGS.


>
> Thanks
>
> On Thu, Mar 7, 2024 at 17:36, Sagi Grimberg <sagi at grimberg.me> wrote:
>>
>>
>> On 28/02/2024 11:14, brookxu.cn wrote:
>>> From: Chunguang Xu <chunguang.xu at shopee.com>
>>>
>>> We found an issue in a production environment while using NVMe
>>> over RDMA: admin_q reconnection failed forever even though the
>>> remote target and the network were fine. After digging into it,
>>> we found it was caused by an ABBA deadlock due to tag allocation.
>>> In our case, the tag was held by a keep-alive request waiting
>>> inside admin_q; since we quiesce admin_q while resetting the ctrl,
>>> the request is marked as idle and will not be processed until the
>>> reset succeeds. Because fabric_q shares its tagset with admin_q,
>>> we need a tag for the connect command when reconnecting to the
>>> remote target, but the only reserved tag was held by the keep-alive
>>> command waiting inside admin_q. As a result, we failed to
>>> reconnect admin_q forever.
>>>
>>> To work around this issue, I think we should not retry the
>>> keep-alive request while the controller is reconnecting: we have
>>> already stopped keep-alive while resetting the controller, and
>>> will start it again once initialization finishes, so it should be
>>> safe to drop it.
>> This is the wrong fix.
>> First we should note that this is a regression caused by:
>> ed01fee283a0 ("nvme-fabrics: only reserve a single tag")
>>
>> Then, you need to restore reserving two tags for the admin
>> tagset.



