[PATCH 2/4] nvme-pci: update sqsize when adjusting the queue depth
Keith Busch
kbusch at kernel.org
Thu Dec 29 08:34:23 PST 2022
On Thu, Dec 29, 2022 at 02:07:04PM +0200, Sagi Grimberg wrote:
>
> > > But if you want patches 1+2 to be taken before 3+4, I'd make sqsize
> > > a 0's based value of q_depth after we subtract one entry for queue
> > > wraparound. So others would not be affected by patches 1+2. Then in
> > > one batch make all assignments to sqsize 1's based like in patch 3 and
> > > change nvme_alloc_io_tagset at the same time.
> >
> > Patches 1+2 do not make any difference in the 0's based vs. not values;
> > they just reduce the queue size by 1 intentionally.
>
> Not sure who "they" is here...
>
> Anyways, I guess we'll have to wait and see if someone happens to care
> that his/her effective queue depth is reduced by 1 all of a sudden...
All nvme transports require that the host driver leave one queue entry
unused. The driver was not spec-compliant prior to patches 1 and 2. I'm
not sure if that "queue full" definition makes sense for fabrics lacking
a shared ring, but the spec says we have to work that way.
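
To make the arithmetic concrete, here is a minimal sketch of the ring
convention in question. The names (struct sq_ring, sq_full, and so on)
are invented for illustration and are not the driver's code; the 0's
based conversion follows Sagi's description above.

#include <stdbool.h>
#include <stdint.h>

struct sq_ring {
	uint16_t q_depth;	/* entries allocated for the ring */
	uint16_t head;		/* next entry the controller consumes */
	uint16_t tail;		/* next entry the host fills */
};

/*
 * head == tail means "empty", so a ring with every entry filled would
 * be indistinguishable from an empty one.  The host therefore stops
 * one short: at most q_depth - 1 commands may be outstanding, which
 * is the entry patches 1+2 intentionally reserve.
 */
static bool sq_full(const struct sq_ring *sq)
{
	return (uint16_t)((sq->tail + 1) % sq->q_depth) == sq->head;
}

/*
 * SQSIZE is a 0's based field, so a usable depth of n is encoded as
 * n - 1.  Sagi's suggestion amounts to first subtracting the
 * wraparound entry from q_depth and then converting to 0's based.
 */
static uint16_t sqsize_from_q_depth(uint16_t q_depth)
{
	return (q_depth - 1) - 1;
}

This is only meant to show why one entry goes unused on a shared ring;
the actual bookkeeping lives in nvme-pci and the fabrics drivers.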