[PATCH v1 0/2] update RDMA controllers queue depth

Mark Ruijter mruijter at primelogic.nl
Tue Sep 21 12:42:53 PDT 2021


Hi Chaitanya,

I’ll try to run some tests asap and let you know.

—Mark

> On 21 Sep 2021, at 21:22, Chaitanya Kulkarni <chaitanyak at nvidia.com> wrote:
> 
> Mark,
> 
>> On 9/21/2021 12:04 PM, Max Gurtovoy wrote:
>> Hi all,
>> This series solves the issue reported by Mark Ruijter while testing
>> SPDK initiators on VMware 7.x connecting to a Linux RDMA target
>> running on an NVIDIA (Mellanox Technologies) ConnectX-6 adapter.
>> During connection establishment, the NVMf target controller exposed a
>> queue depth capability of 1024 but wasn't able to actually satisfy
>> that depth, because the NVMf driver didn't take the underlying HW
>> capabilities into consideration. For now, limit the RDMA queue depth
>> to a value of 128 (the default, which should work for all RDMA
>> controllers). To do that, introduce a new controller operation that
>> returns the possible queue size for the given HW. Other transports
>> will continue with their old behaviour.
>> 
>> In the future, in order to increase this size, we'll need to create a
>> special RDMA API to calculate a possible queue depth for ULPs. As we
>> know, RDMA I/O operations are sometimes built from multiple WRs (such
>> as memory registrations and invalidations), so the ULP driver should
>> take this into consideration during device discovery and queue depth
>> calculations.
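To make that second point concrete, such an API would roughly divide the
device's WR budget by the number of WRs a single command can need. A
purely hypothetical helper (only ib_device->attrs.max_qp_wr is a real
RDMA core field; the function and the per-command count are assumptions):

/* Hypothetical illustration: estimate how many commands fit in a QP
 * when each command may need several WRs (send + memory registration +
 * invalidation in the worst case).
 */
static u32 nvmet_rdma_estimate_queue_depth(struct ib_device *dev)
{
	const u32 wrs_per_cmd = 3;	/* assumed worst case per command */

	return dev->attrs.max_qp_wr / wrs_per_cmd;
}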
>> 
>> Max Gurtovoy (2):
>>   nvmet: add get_queue_size op for controllers
>>   nvmet-rdma: implement get_queue_size controller op
>> 
>>  drivers/nvme/target/core.c  | 8 +++++---
>>  drivers/nvme/target/nvmet.h | 1 +
>>  drivers/nvme/target/rdma.c  | 8 ++++++++
>>  3 files changed, 14 insertions(+), 3 deletions(-)
>> 
> 
> It would be great if you could provide a Tested-by tag.
> 

