[nvme(t)-rdma Question] Understanding nvmet_port's max_queue_size
Jigao Luo
jigao.luo at outlook.com
Mon Sep 16 07:35:31 PDT 2024
Hi nvme-rdma experts,
I’m trying to understand the queue size (max_queue_size) for NVMe-oF
over RDMA. I came across the max_queue_size field in struct nvmet_port,
which appears to be capped at NVME_RDMA_MAX_QUEUE_SIZE.
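For reference, this is roughly what I am looking at. It is paraphrased
from the current sources rather than copied, so the exact member type,
values, and placement may differ from any given kernel tree:

/* include/linux/nvme-rdma.h (paraphrased; 256 after the patch set
 * linked below, 128 on older kernels such as mine) */
#define NVME_RDMA_MAX_QUEUE_SIZE	256

/* drivers/nvme/target/nvmet.h (paraphrased excerpt) */
struct nvmet_port {
	/* ... other members elided ... */
	int	max_queue_size;	/* capped at NVME_RDMA_MAX_QUEUE_SIZE for RDMA ports */
};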
First Question:
I would like to confirm my understanding of this setting. The
max_queue_size is configured on the NVMe-oF target when setting up ports
for NVMe subsystems. Once it is set, if an NVMe-oF host connects and runs
FIO against the exported device, my assumption is that an FIO iodepth
larger than the target’s max_queue_size cannot be fully utilized, i.e. no
more than max_queue_size commands can be outstanding on an I/O queue at
any time. Is this assumption correct? Alternatively, could an FIO iodepth
larger than the target’s max_queue_size hurt performance because of this
mismatch in queue depths?
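To make my assumption concrete, here is a toy user-space illustration of
what I expect to happen (this is not kernel code, and the 256/128 numbers
are just example values):

#include <stdio.h>

/*
 * Toy model of my assumption: the number of commands actually in flight
 * on one I/O queue is bounded by the negotiated queue depth, no matter
 * how large the FIO iodepth is; the excess just waits in the submission
 * path on the host.
 */
static unsigned int effective_inflight(unsigned int fio_iodepth,
				       unsigned int negotiated_qdepth)
{
	return fio_iodepth < negotiated_qdepth ? fio_iodepth : negotiated_qdepth;
}

int main(void)
{
	/* e.g. FIO iodepth=256 against a target queue capped at 128 */
	printf("effective in-flight depth: %u\n", effective_inflight(256, 128));
	return 0;
}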
Second Question:
I noticed that this patch set exposes max_queue_size as a configuration
entry and increases the maximum value from 128 to 256:
https://lists.infradead.org/pipermail/linux-nvme/2024-January/044145.html
My kernel and driver are older than this patch set (Linux 6.6.1). Is
there any way, on such an older kernel, to view the max_queue_size that
is currently in effect? I have checked dmesg and
/sys/kernel/config/nvmet/ports/, but I don’t see any reference to
max_queue_size.
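As far as I can tell, on 6.6.1 the limit is only a compile-time constant,
so there may simply be nothing to read at runtime. This is what I believe
my tree contains (quoted from memory, so please correct me if the value
or location is wrong):

/* include/linux/nvme-rdma.h on Linux 6.6.1, as I understand it */
#define NVME_RDMA_IP_PORT		4420
#define NVME_RDMA_MAX_QUEUE_SIZE	128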
Thank you for your assistance and insights!
Best regards,
Jigao