[PATCH v2 0/8] Introduce new max-queue-size configuration

Max Gurtovoy mgurtovoy at nvidia.com
Thu Jan 4 01:25:41 PST 2024


Hi Christoph/Sagi/Keith,
This patch series mainly adds an interface for a user to
configure the maximum queue size for fabrics via the port configfs.
Using this interface, a user will be able to better control system and
HW resources.
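For illustration, using the new entry might look like the sketch below.
This is only an assumed usage example: the attribute name
(param_max_queue_size) and the port number are assumptions on my side;
the authoritative name is whatever patch 7/8 adds.

```shell
# Hypothetical usage sketch -- attribute name and port path are
# assumptions; see patch 7/8 for the actual configfs entry.
PORT=/sys/kernel/config/nvmet/ports/1

# Read the current maximum queue size advertised for this port
cat "$PORT"/param_max_queue_size

# Lower it to cap HW resource usage; like other port parameters,
# this would have to be set before the port is enabled (i.e. before
# linking a subsystem under "$PORT"/subsystems/)
echo 128 > "$PORT"/param_max_queue_size
```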

Also, I've increased the maximum queue depth for RDMA controllers to
256, following a request from Guixin Liu. The new value is valid only
for controllers that don't support PI (protection information).

While developing this feature I've made some minor cleanups as well.

Changes from v1:
 - collected Reviewed-by signatures (Sagi and Guixin Liu)
 - removed the patches that unify fabric host and target max/min/default
   queue size definitions (Sagi)
 - align MQES and SQ size according to the NVMe Spec (patch 2/8)

Max Gurtovoy (8):
  nvme-rdma: move NVME_RDMA_IP_PORT from common file
  nvmet: compare mqes and sqsize only for IO SQ
  nvmet: set maxcmd to be per controller
  nvmet: set ctrl pi_support cap before initializing cap reg
  nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
  nvme-rdma: clamp queue size according to ctrl cap
  nvmet: introduce new max queue size configuration entry
  nvmet-rdma: set max_queue_size for RDMA transport

 drivers/nvme/host/rdma.c          | 19 ++++++++++++++-----
 drivers/nvme/target/admin-cmd.c   |  2 +-
 drivers/nvme/target/configfs.c    | 28 ++++++++++++++++++++++++++++
 drivers/nvme/target/core.c        | 18 ++++++++++++++++--
 drivers/nvme/target/discovery.c   |  2 +-
 drivers/nvme/target/fabrics-cmd.c |  5 ++---
 drivers/nvme/target/nvmet.h       |  6 ++++--
 drivers/nvme/target/passthru.c    |  2 +-
 drivers/nvme/target/rdma.c        | 10 ++++++++++
 include/linux/nvme-rdma.h         |  6 +++++-
 include/linux/nvme.h              |  2 --
 11 files changed, 82 insertions(+), 18 deletions(-)

-- 
2.18.1
