[PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport
Max Gurtovoy
mgurtovoy at nvidia.com
Wed Jan 3 14:42:30 PST 2024
On 01/01/2024 11:39, Sagi Grimberg wrote:
>
>> A new port configuration was added to set max_queue_size. Clamp user
>> configuration to RDMA transport limits.
>>
>> Increase the maximal queue size of RDMA controllers from 128 to 256
>> (the default size stays 128, as before).
>>
>> Reviewed-by: Israel Rukshin <israelr at nvidia.com>
>> Signed-off-by: Max Gurtovoy <mgurtovoy at nvidia.com>
>> ---
>> drivers/nvme/target/rdma.c | 8 ++++++++
>> include/linux/nvme-rdma.h | 3 ++-
>> 2 files changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index f298295c0b0f..3a3686efe008 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -1943,6 +1943,14 @@ static int nvmet_rdma_add_port(struct
>> nvmet_port *nport)
>> nport->inline_data_size = NVMET_RDMA_MAX_INLINE_DATA_SIZE;
>> }
>> + if (nport->max_queue_size < 0) {
>> + nport->max_queue_size = NVME_RDMA_DEFAULT_QUEUE_SIZE;
>> + } else if (nport->max_queue_size > NVME_RDMA_MAX_QUEUE_SIZE) {
>> + pr_warn("max_queue_size %u is too large, reducing to %u\n",
>> + nport->max_queue_size, NVME_RDMA_MAX_QUEUE_SIZE);
>> + nport->max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;
>> + }
>> +
>
> Not sure its a good idea to tie the host and nvmet default values
> together.
It is already tied for RDMA. I don't see a reason to change it.
I will keep the other fabrics default values separate, as they are
today, following your review comments on the other commits.
We can discuss it in a dedicated series since it is not related to the
feature we would like to introduce here.