[PATCH 2/2] nvmet-rdma: implement get_queue_size controller op

Max Gurtovoy mgurtovoy at nvidia.com
Wed Sep 22 00:44:20 PDT 2021


On 9/22/2021 1:52 AM, Sagi Grimberg wrote:
>
>> Limit the maximal queue size for RDMA controllers. Today, the target
>> reports a limit of 1024 and this limit isn't valid for some of the RDMA
>> based controllers. For now, limit RDMA transport to 128 entries (the
>> default queue depth configured for Linux NVMeoF host fabric drivers).
>> Future general solution should use RDMA/core API to calculate this size
>> according to device capabilities and number of WRs needed per NVMe IO
>> request.
>
> What is preventing you from doing that today? You have the device,
> can't you check attr.max_qp_wr?

max_qp_wr gives me the maximal number of WQEs one can post (the 
minimal unit). In reality we have WRs that are constructed from multiple WQEs.

Initially, I wanted to divide max_qp_wr by the maximal WR operation size 
in the low-level driver. But that would penalize ULPs that never post 
such a maximal WR.
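
Just to illustrate (this is not in the patch), the rejected calculation 
would have looked roughly like the sketch below. MAX_WQES_PER_WR is a 
made-up placeholder for the worst-case number of WQEs a single WR can 
expand to in the low-level driver - exactly the value we have no ib_ API 
to query today; the 1024 cap is the limit the target reports today:

#include <linux/kernel.h>
#include <rdma/ib_verbs.h>

/* hypothetical worst case; no ib_ API exposes this per device today */
#define MAX_WQES_PER_WR		4

static u16 nvmet_rdma_calc_queue_size(struct ib_device *dev)
{
	/*
	 * Every ULP would pay for the worst-case WR here, even if it
	 * never posts one - which is why this approach was dropped.
	 */
	return min_t(u32, 1024, dev->attrs.max_qp_wr / MAX_WQES_PER_WR);
}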

So for now, as mentioned, until we have such an ib_ API, let's set it to 128.
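
In other words, the interim op boils down to something like the sketch 
below (the exact hook and constant names are my shorthand here and may 
differ in the actual patch):

/* 128 matches the default queue depth of the Linux NVMeoF host drivers */
#define NVMET_RDMA_MAX_QUEUE_SIZE	128

static u16 nvmet_rdma_get_queue_size(const struct nvmet_ctrl *ctrl)
{
	return NVMET_RDMA_MAX_QUEUE_SIZE;
}

with nvmet_rdma_ops gaining a .get_queue_size entry, so the core can 
clamp the reported queue size per transport instead of always 
advertising 1024.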



