[PATCH 2/4] nvme-pci: update sqsize when adjusting the queue depth
Keith Busch
kbusch at kernel.org
Tue Jan 3 08:45:48 PST 2023
On Mon, Jan 02, 2023 at 11:39:35AM +0200, Sagi Grimberg wrote:
>
> > > > > But if you want patches 1+2 to be taken before 3+4, I'd make sqsize
> > > > > a 0's based value of q_depth after we subtract one entry for queue
> > > > > wraparound, so others would not be affected by patches 1+2. Then, in
> > > > > one batch, make all assignments to sqsize 1's based, as in patch 3, and
> > > > > change nvme_alloc_io_tagset at the same time.
> > > >
> > > > 1 does not make any difference in the 0's based vs. not values;
> > > > they just reduce the queue size by 1 intentionally.
> > >
> > > Not sure who 'they' is here...
> > >
> > > Anyway, I guess we'll have to wait and see if someone happens to care
> > > that their effective queue depth is reduced by 1 all of a sudden...
> >
> > All nvme transports require the host driver to leave one queue entry unused.
> > The driver was not spec compliant prior to patches 1 and 2. I'm not sure
> > if that "queue full" definition makes sense for fabrics lacking a shared
> > ring, but the spec says we have to work that way.
>
> fabrics already reserves one entry for a connect cmd, so effectively a
> slot was being set aside there anyway...
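
Since the "one entry unused" rule keeps coming up, note it is just the
usual ring full test; a minimal sketch, purely for illustration (the
names are made up, this is not driver code):

	/*
	 * The SQ is treated as full once advancing the tail would land on
	 * the head, so of the queue's `entries` slots at most entries - 1
	 * can ever hold commands at the same time.
	 */
	static inline bool sq_would_be_full(u16 head, u16 tail, u16 entries)
	{
		return ((tail + 1) % entries) == head;
	}
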
nvmf_connect_io_queue() sets cmd.connect.sqsize to ctrl->sqsize, so
doing a +1 on that value for the tagset is definitely not right. A
spec-compliant target should have been halting and setting CSTS.CFS if
we actually used all the tags.
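
To put the arithmetic in one place, this is the relationship I mean (a
minimal sketch, assuming ctrl->sqsize carries the 0's based value and
"set" is the blk_mq_tag_set being built; not the actual patch):

	/*
	 * cmd.connect.sqsize is sent 0's based, i.e. the queue has
	 * ctrl->sqsize + 1 slots and one of them must stay unused for
	 * wraparound, so at most ctrl->sqsize commands can be in flight
	 * at once.  The tagset depth therefore has to be sqsize, not
	 * sqsize + 1.
	 */
	set->queue_depth = ctrl->sqsize;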