[PATCH 2/4] nvme-pci: update sqsize when adjusting the queue depth

Sagi Grimberg sagi at grimberg.me
Mon Jan 2 01:39:35 PST 2023


>>>> But if you want patches 1+2 to be taken before 3+4, I'd make sqsize
>>>> a 0's-based value of q_depth after we subtract one entry for queue
>>>> wraparound, so others would not be affected by patches 1+2. Then, in
>>>> one batch, make all assignments to sqsize 1's-based as in patch 3 and
>>>> change nvme_alloc_io_tagset at the same time.
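
For reference, the suggested intermediate step would look roughly like
the sketch below (dev->q_depth and ctrl.sqsize are the real nvme-pci
fields; the helper itself is hypothetical, not code from the patches):

/*
 * Sketch only, kernel style: keep sqsize the 0's-based encoding of
 * q_depth after q_depth has already been shrunk by the one entry
 * reserved for queue wraparound.
 */
static void nvme_set_queue_depth(struct nvme_dev *dev, u32 requested)
{
	/* patches 1+2: leave one entry unused for queue wraparound */
	dev->q_depth = requested - 1;
	/* keep sqsize the 0's-based value of the adjusted q_depth */
	dev->ctrl.sqsize = dev->q_depth - 1;
}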
>>>
>>> Patches 1+2 do not make any difference in the 0's-based vs. not
>>> values; they just reduce the queue size by 1 intentionally.
>>
>> Not sure who "they" is here...
>>
>> Anyway, I guess we'll have to wait and see if anyone happens to care
>> that their effective queue depth is suddenly reduced by 1...
> 
> All nvme transports require the host driver to leave one queue entry
> unused; the driver was not spec compliant prior to patches 1 and 2. I'm
> not sure whether that "queue full" definition makes sense for fabrics,
> which lack a shared ring, but the spec says we have to work that way.
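
For a shared ring, that "queue full" definition exists because
head == tail must unambiguously mean empty. A minimal standalone sketch
of the resulting full/empty test (illustrative names, not driver code):

#include <stdbool.h>
#include <stdint.h>

struct ring {
	uint16_t head;   /* next entry the consumer will read */
	uint16_t tail;   /* next entry the producer will write */
	uint16_t qsize;  /* total entries in the ring */
};

static bool ring_empty(const struct ring *r)
{
	return r->head == r->tail;
}

static bool ring_full(const struct ring *r)
{
	/*
	 * If every slot could be used, a completely full ring would
	 * also have head == tail, indistinguishable from empty.
	 * Leaving one slot unused resolves the ambiguity, so at most
	 * qsize - 1 commands are ever outstanding.
	 */
	return (uint16_t)((r->tail + 1) % r->qsize) == r->head;
}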

Fabrics already reserves one entry for a connect command, so effectively
it was reserving a slot already...
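
On the fabrics side, that reservation goes through blk-mq's reserved
tags. A minimal sketch, assuming one tag held back for the connect
command (the helper name and the exact value are illustrative;
reserved_tags is the real struct blk_mq_tag_set field):

#include <linux/blk-mq.h>

/*
 * Sketch: holding one tag out of the general pool means regular I/O
 * never sees more than queue_depth - 1 usable entries, which is why
 * fabrics was effectively reserving a slot already.
 */
static void nvmf_init_tag_set(struct blk_mq_tag_set *set, unsigned int depth)
{
	set->queue_depth = depth;
	set->reserved_tags = 1;	/* held back for the connect command */
}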


