[PATCH 2/2] nvmet-rdma: implement get_queue_size controller op

Jason Gunthorpe jgg at nvidia.com
Wed Sep 22 05:10:14 PDT 2021


On Wed, Sep 22, 2021 at 12:18:15PM +0300, Sagi Grimberg wrote:

> Can't you do this in rdma_rw? All of its users will need the exact
> same value, right?

No, it depends on what ops the user is going to use.
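To make that concrete, here is a minimal illustrative sketch (my names
and values, not from the patch): the budget rdma_rw consumes per I/O
already depends on the device, port and payload size each user picks,
so there is no single number rdma_rw could hand back to everyone.

/*
 * Illustrative only: two users of rdma_rw on the same device can need
 * very different per-I/O MR budgets, so rdma_rw itself has no single
 * "queue size" to expose.
 */
#include <rdma/ib_verbs.h>
#include <rdma/rw.h>

static void example_show_mr_factor(struct ib_device *dev, u32 port)
{
	/* MRs needed per READ/WRITE context scale with the payload size */
	unsigned int small_io = rdma_rw_mr_factor(dev, port, 1);   /* ~4K I/O */
	unsigned int large_io = rdma_rw_mr_factor(dev, port, 256); /* ~1M I/O */

	pr_info("per-I/O MR factor: %u (small) vs %u (large)\n",
		small_io, large_io);
}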
 
> > Is it necessary for this submission, or can we live with a depth of
> > 128 for now? With or without the new ib_ API, the queue depth will
> > be around these sizes.
> 
> I am not sure I see the entire complexity. Even if this calc is not
> accurate, you are already proposing to hard-code it to 128, so you
> can do this to account for the boundaries there.

As I understood it, the 128 is there to match the limit the initiator
hard-codes. Both sides have the same basic problem when allocating the
RDMA QP; they just had different hard-coded limits. Because of this we
know that 128 is OK for all RDMA HW, as the initiator has already
proven it.
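
For reference, the fix under discussion boils down to roughly the
following sketch (identifiers taken from the patch subject; the exact
code in the posted patch may differ):

/* sketch: advertise the same hard-coded depth the initiator already uses */
#define NVMET_RDMA_MAX_QUEUE_SIZE	128

static u16 nvmet_rdma_get_queue_size(const struct nvmet_ctrl *ctrl)
{
	return NVMET_RDMA_MAX_QUEUE_SIZE;
}

static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
	/* ...existing nvmet-rdma ops... */
	.get_queue_size		= nvmet_rdma_get_queue_size,
};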

For a stable fix to the interop problem this is a good approach.

If someone wants to add all sorts of complexity to try to figure out
the actual device-specific limit, then they should probably also show
that there is a performance win (or at least no loss) from increasing
this number further.
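
For illustration, a hedged sketch of what such a device-specific
calculation might look like (the helper name and the one-SEND-per-I/O
assumption are mine, not from this thread):

/*
 * Hypothetical device-specific calculation, for illustration only:
 * derive the depth from what the HCA reports instead of hard-coding
 * 128.  Assumes each I/O costs the rdma_rw WRs plus one SEND.
 */
static u32 example_device_specific_queue_size(struct ib_device *dev,
					      u32 port, unsigned int maxpages)
{
	u32 wrs_per_io = rdma_rw_mr_factor(dev, port, maxpages) + 1;

	/* the send queue cannot exceed what the device can allocate */
	return dev->attrs.max_qp_wr / wrs_per_io;
}

Whether a value derived like this actually beats a flat 128 is exactly
the performance question that would need answering first.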

Jason
