[PATCH 3/3] nvme: code command_id with a genctr for use-after-free validation
Sagi Grimberg
sagi at grimberg.me
Mon May 17 15:15:40 PDT 2021
>>>>> Additionally, I do not agree with the statement "we never create such
>>>>> long queues anyways". I have already done this myself.
>>>>
>>>> Why? That won't improve bandwidth, and will increase latency. We already
>>>> have timeout problems with the current default 1k qdepth on some
>>>> devices.
>>>
>>> For testing FPGA or ASIC solutions that support offloading NVMe it is
>>> more convenient to use a single queue pair with a high queue depth than
>>> creating multiple queue pairs that each have a lower queue depth.
>>
>> And you actually see a benefit for using queues that are >=4096 in
>> depth? That is surprising to me...
>
> Hi Sagi,
>
> It seems like there is a misunderstanding. I'm not aware of any use case
> where very high queue depths provide a performance benefit. Such high
> queue depths are necessary to verify an implementation of an NVMe
> controller that maintains state per NVMe command and to verify whether
> the NVMe controller pauses fetching new NVMe commands if the internal
> NVMe command buffer is full.
I see, thanks for the clarification. However, I think a host may
choose not to support the full 65535 queue depth, just as devices can
choose not to support it. As for testing your device, this will fall
into the same category as all the other things Linux doesn't support:
it will need different host software to test.
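
For readers following the thread, here is a minimal, self-contained sketch
of the kind of command_id encoding under discussion. The bit widths (4
genctr bits, 12 tag bits), macro names, and helpers are illustrative
assumptions, not the exact kernel code; the point is only to show why
reserving generation-counter bits shrinks the usable tag space and caps the
queue depth:

/*
 * Sketch: split the 16-bit command_id into a small generation counter in
 * the upper bits and the request tag in the lower bits. With 4 genctr bits
 * only 12 bits remain for the tag, so a queue cannot be deeper than 4096
 * entries under this scheme.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define CID_GENCTR_BITS 4                      /* assumed genctr width */
#define CID_TAG_BITS    (16 - CID_GENCTR_BITS) /* bits left for the tag */
#define CID_TAG_MASK    ((1u << CID_TAG_BITS) - 1)
#define CID_MAX_QDEPTH  (1u << CID_TAG_BITS)   /* 4096 */

/* Build a command_id from a per-request generation counter and a tag. */
static uint16_t cid_encode(uint8_t genctr, uint16_t tag)
{
	return (uint16_t)(((genctr & ((1u << CID_GENCTR_BITS) - 1)) << CID_TAG_BITS) |
			  (tag & CID_TAG_MASK));
}

static uint16_t cid_tag(uint16_t cid)    { return cid & CID_TAG_MASK; }
static uint8_t  cid_genctr(uint16_t cid) { return cid >> CID_TAG_BITS; }

int main(void)
{
	/*
	 * A completion whose genctr does not match the currently live request
	 * can be rejected as stale instead of touching a request that has
	 * already been completed and reused (the use-after-free the patch
	 * title refers to).
	 */
	uint16_t cid = cid_encode(3, 0x7ab);

	assert(cid_tag(cid) == 0x7ab);
	assert(cid_genctr(cid) == 3);
	printf("max queue depth with this encoding: %u\n", CID_MAX_QDEPTH);
	return 0;
}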