[PATCH 2/2] nvmet-rdma: implement get_queue_size controller op

Max Gurtovoy mgurtovoy at nvidia.com
Wed Sep 22 05:57:17 PDT 2021


On 9/22/2021 3:10 PM, Jason Gunthorpe wrote:
> On Wed, Sep 22, 2021 at 12:18:15PM +0300, Sagi Grimberg wrote:
>
>> Can't you do this in rdma_rw? All of the users of it will need the
>> exact same value, right?
> No, it depends on what ops the user is going to use.
>   
>>> is it necessary for this submission, or can we live with 128 depth for
>>> now? With or without the new ib_ API, the queue depth will be in this
>>> range.
>> I am not sure I see the entire complexity. Even if this calc is not
>> accurate, you are already proposing to hard-code it to 128, so you
>> can do this to account for the boundaries there.
> As I understood it, the 128 is to match what the initiator hard-codes
> its limit to - both sides have the same basic problem with allocating
> the RDMA QP, they just had different hard-coded limits. Because of this
> we know that 128 is OK for all RDMA HW, as the initiator has proven it
> already.

Not exactly. The initiator's 128 is only the default queue size, used 
when no different value is set in the connect command.

This value can probably be bigger on the initiator, since it doesn't 
perform RDMA operations but only sends descriptors to the target.

So we'll need the future ib_ API for the initiator as well, but not the 
RW API, since the work-request factor per NVMe I/O there will be 3 
(MEM_REG, MEM_INVALID, SEND).
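
To make the arithmetic concrete, here is a minimal userspace sketch of 
the kind of calculation such an ib_ helper could do. The names, the 128 
cap, and the device numbers below are illustrative assumptions, not the 
proposed kernel API:

/*
 * Standalone sketch (not actual kernel code): derive a per-queue depth
 * from the device limit on WRs per QP and a per-I/O work-request factor.
 * The factor of 3 (MEM_REG, MEM_INVALID, SEND) matches the initiator
 * case above; the target would need a larger factor because the RW API
 * also posts RDMA READ/WRITE WRs per I/O.
 */
#include <stdio.h>

#define DEFAULT_QUEUE_SIZE	128	/* the hard-coded limit both sides use today */
#define INITIATOR_WR_FACTOR	3	/* MEM_REG + MEM_INVALID + SEND per NVMe I/O */

static int calc_queue_size(int dev_max_qp_wr, int wr_factor)
{
	int depth = dev_max_qp_wr / wr_factor;

	/* never advertise more than the agreed default */
	return depth < DEFAULT_QUEUE_SIZE ? depth : DEFAULT_QUEUE_SIZE;
}

int main(void)
{
	/* hypothetical device limits on outstanding WRs per QP */
	printf("%d\n", calc_queue_size(32768, INITIATOR_WR_FACTOR)); /* 128 */
	printf("%d\n", calc_queue_size(256, INITIATOR_WR_FACTOR));   /* 85  */
	return 0;
}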


>
> For a stable fix to the interop problem this is a good approach.
>
> If someone wants to add all sorts of complexity to try and figure out
> the actual device-specific limit, then they should probably also show
> that there is a performance win (or at least not a loss) to increasing
> this number further.

Correct.


> Jason


