[PATCH] nvme-rdma: fix crash for no IO queues

Chao Leng lengchao at huawei.com
Wed Mar 3 02:27:01 GMT 2021



On 2021/3/3 2:24, Keith Busch wrote:
> On Tue, Mar 02, 2021 at 05:49:05PM +0800, Chao Leng wrote:
>>
>>
>> On 2021/3/2 15:48, Hannes Reinecke wrote:
>>> On 2/27/21 10:30 AM, Chao Leng wrote:
>>>>
>>>>
>>>> On 2021/2/27 17:12, Hannes Reinecke wrote:
>>>>> On 2/24/21 6:59 AM, Chao Leng wrote:
>>>>>>
>>>>>>
>>>>>> On 2021/2/24 7:21, Keith Busch wrote:
>>>>>>> On Tue, Feb 23, 2021 at 03:26:02PM +0800, Chao Leng wrote:
>>>>>>>> A crash happens when the Set Features (NVME_FEAT_NUM_QUEUES) command
>>>>>>>> times out during an nvme over rdma (RoCE) reconnection; the reason is
>>>>>>>> that a queue which was never allocated gets used.
>>>>>>>>
>>>>>>>> If the controller is not a discovery controller and has no I/O queues,
>>>>>>>> the connection should fail.
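
A minimal sketch of the check described above, assuming the
nvme_discovery_ctrl() helper and the queue_count field from the driver's
nvme.h; the helper name, its placement in the connect path, and the error
code below are illustrative assumptions, not the patch itself:

/*
 * Sketch only: fail the (re)connection when a non-discovery controller
 * ends up with zero I/O queues (e.g. because Set Features
 * (NVME_FEAT_NUM_QUEUES) timed out).  Helper name, placement and error
 * code are assumptions.
 */
static int nvme_rdma_check_io_queues(struct nvme_rdma_ctrl *ctrl)
{
	/* queue_count includes the admin queue, so <= 1 means no I/O queues */
	if (ctrl->ctrl.queue_count <= 1 &&
	    !nvme_discovery_ctrl(&ctrl->ctrl)) {
		dev_err(ctrl->ctrl.device,
			"no I/O queues, refusing to connect\n");
		return -EIO;
	}
	return 0;
}
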
>>>>>>>
>>>>>>> If you're getting a timeout, we need to quit initialization. Hannes
>>>>>>> attempted making that status visible for fabrics here:
>>>>>>>
>>>>>>> http://lists.infradead.org/pipermail/linux-nvme/2021-January/022353.html
>>>>>>>
>>>>>> I know the patch. It cannot solve the scenario where the target is an
>>>>>> attacker or the target behavior is incorrect.
>>>>>> If the target returns 0 I/O queues or returns some other error code, the
>>>>>> crash will still happen. We should not allow this to happen.
>>>>> I'm fully with you that we shouldn't crash, but at the same time a
>>>>> value of '0' for the number of I/O queues is considered valid.
>>>>> So we should fix the code to handle this scenario, not disallow
>>>>> zero I/O queues.
>>>> '0' I/O queues doesn't make any sense for nvme over fabrics; this is
>>>> different from nvme over pci. If there is some bug in the target, we can
>>>> debug it on the target instead of using the admin queue on the host.
>>>> The target may be an attacker or the target behavior may be incorrect,
>>>> so we should avoid the crash. Another option: prohibit request delivery
>>>> if the I/O queues were not created.
>>>> I think failing the connection with '0' I/O queues is the better choice.
>>>
>>> Might be, but that's not for me to decide.
>>> I tried that initially, but that patch got rejected as _technically_ the
>>> controller is reachable via its admin queue.
>> I know about your patch. That patch failed the connection for all
>> transports. That is not good for the pcie transport, where the controller
>> can still accept admin commands to get some diagnostics (perhaps an error
>> log page); this was Keith's thought.
> 
> We can continue to administrate a controller that didn't create IO
> queues, but the controller must provide a response to all commands. If
> it doesn't, the controller will either be reset or abandoned. This
> should be the same behavior for any transport, though; there's nothing
> special about PCIe for that.
Though I don't see any useful scenario for this with nvme over fabrics now,
reserving it for future possibilities may be the better choice.
I will send another patch that prohibits request delivery if the queue is
not live.
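
A rough sketch of what gating request delivery on queue liveness can look
like, modelled on the NVME_RDMA_Q_LIVE / nvmf_check_ready() pattern already
used in nvme_rdma_queue_rq(); the follow-up patch itself is not reproduced
here, so treat the details below as assumptions:

/*
 * Sketch only: reject or requeue commands submitted to an RDMA queue that
 * was never (re)established, instead of touching unallocated queue state.
 */
static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
		const struct blk_mq_queue_data *bd)
{
	struct nvme_rdma_queue *queue = hctx->driver_data;
	struct request *rq = bd->rq;
	bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);

	/* Fail or requeue commands sent to a queue that is not live */
	if (!nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
		return nvmf_fail_nonready_command(&queue->ctrl->ctrl, rq);

	/* ... normal command setup and RDMA send path ... */
	return BLK_STS_OK;
}
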



