Number of data and admin queues in use
Thomas Glanzmann
thomas at glanzmann.de
Tue Jul 15 11:05:21 PDT 2025
Hello Chaitanya,
> From what I can see you are getting the number of queues for both the tcp and
> pcie NVMe controllers; what is your question?
My question was how to see the number and size of the NVMe IO queues, but
Keith already answered that. I just thanked him and added some stats
from the NetApp.
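For the archives, here is a quick sketch of reading those two values from
sysfs. I am assuming here that the kernel exposes the queue_count and sqsize
attributes under /sys/class/nvme/<ctrl>/; adjust if your kernel names them
differently.

/* Minimal sketch: print the queue count and submission queue size that the
 * nvme driver reports for one controller via sysfs. The attribute names
 * queue_count and sqsize are an assumption on my part. */
#include <stdio.h>

static void print_attr(const char *ctrl, const char *attr)
{
	char path[256], buf[64];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/class/nvme/%s/%s", ctrl, attr);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", attr, buf);
	fclose(f);
}

int main(int argc, char **argv)
{
	const char *ctrl = argc > 1 ? argv[1] : "nvme0";

	print_attr(ctrl, "queue_count"); /* queues allocated by the driver */
	print_attr(ctrl, "sqsize");      /* negotiated submission queue size */
	return 0;
}

Compile with cc and pass the controller name, e.g. ./a.out nvme1.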
> Another way to dig into the controller-side fields, such as the queue depth,
> is to read the CAP register; see this from the spec,
> Figure 36: Offset 0h: CAP – Controller Capabilities :-
> "Maximum Queue Entries Supported (MQES): This field indicates the
> maximum individual queue size that the controller supports. For NVMe
> over PCIe implementations, this value applies to the I/O Submission
> Queues and I/O Completion Queues that the host creates. For NVMe over
> Fabrics implementations, this value applies to only the I/O Submission
> Queues that the host creates. This is a 0’s based value. The minimum
> value is 1h, indicating two entries."
> -ck
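Out of curiosity I also tried decoding that field. A minimal sketch, assuming
you already have the raw 64-bit CAP value (for a PCIe controller nvme-cli's
show-regs can dump it); the default value in the code is made up:

/* Sketch: extract MQES from a raw 64-bit CAP value. Per the spec text
 * quoted above, MQES sits in bits 15:0 and is a 0's based value, so the
 * real maximum queue size is MQES + 1. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	uint64_t cap = argc > 1 ? strtoull(argv[1], NULL, 0) : 0x3fff; /* made-up example */
	uint16_t mqes = cap & 0xffff;

	printf("MQES = %u -> up to %u entries per I/O queue\n", mqes, mqes + 1u);
	return 0;
}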
> [1]
> For the fabrics transport (TCP) the number of queues is calculated using
> nvmf_nr_io_queue() to make sure we don't create more read/default
> queues than CPUs are available; the same check also applies to the write
> and poll queues.
> nvme_set_queue_count adjusts the queue count based on the controller
> capabilities, which can also clamp the queue count.
> nvmf_set_io_queues() sets the queue count for each queue type (read,
> default, poll), then nvmf_map_queues() maps them into the blk-mq
> structure so that each of the default/read/poll queues gets attached
> to a blk_mq context.
> On my machine I have 48 CPUs, so when I create a tcp target I get :-
> [ 1196.058440] nvme nvme1: creating 48 I/O queues.
> [ 1196.062370] nvme nvme1: mapped 48/0/0 default/read/poll queues.
> You should be able to see this in the debug messages coming from the
> queue allocation helpers below, which also carry the controller
> device name "nvme1" :-
> nvme_tcp_alloc_io_queues()
> nvmf_map_queues()
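To make sure I understood the flow you describe, I wrote it down as a
simplified sketch. This is not the actual kernel code, just the shape of it;
the granted queue count and queue depth are taken from my NetApp dmesg output
further down, the requested queue count is an arbitrary example:

/* Simplified sketch of the negotiation described above -- not the actual
 * kernel code. The host will not create more I/O queues than it has CPUs,
 * the controller may grant even fewer (Number of Queues feature), and the
 * per-queue depth is capped by what the controller advertises (MQES + 1). */
#include <stdio.h>

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int nr_cpus = 48;           /* e.g. the 48-CPU box above */
	unsigned int requested_queues = 128; /* arbitrary example request */
	unsigned int ctrl_granted = 2;       /* what the NetApp granted below */
	unsigned int requested_qsize = 128;  /* host queue_size from the log below */
	unsigned int mqes = 31;              /* 0's based -> 32 entries */

	unsigned int nr_io_queues = min_u(requested_queues, nr_cpus);

	nr_io_queues = min_u(nr_io_queues, ctrl_granted);

	printf("creating %u I/O queues, queue_size clamped to %u\n",
	       nr_io_queues, min_u(requested_qsize, mqes + 1));
	return 0;
}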
Tomorrow I'll set up an NVMe/TCP target on Linux and do some benchmarking. I'll
also hook up the NetApp to 64 Gbit/s FC and do some benchmarking with FC and
FC/NVMe.
Thank you for the additional insight. I had never paid attention to this
before, but I did now:
[ 3730.402432] nvme nvme0: queue_size 128 > ctrl sqsize 32, clamping down
[ 3795.115084] subsysnqn nqn.1992-08.com.netapp:sn.e0a0273a60b711f09deed039ead647e8:subsystem.svm1_subsystem_553 iopolicy changed from numa to queue-depth
[ 3795.154560] nvme nvme0: creating 2 I/O queues.
[ 3795.156535] nvme nvme0: mapped 2/0/0 default/read/poll queues.
[ 3801.004641] nvme nvme1: creating 2 I/O queues.
[ 3801.006541] nvme nvme1: mapped 2/0/0 default/read/poll queues.
Then I bumped the number of queues and the queue size on the NetApp and got:
[98114.846603] nvme nvme0: queue_size 128 > ctrl sqsize 32, clamping down
[98727.596158] subsysnqn nqn.1992-08.com.netapp:sn.e0a0273a60b711f09deed039ead647e8:subsystem.svm1_subsystem_553 iopolicy changed from numa to queue-depth
[98727.635617] nvme nvme0: creating 4 I/O queues.
[98727.638218] nvme nvme0: mapped 4/0/0 default/read/poll queues.
[98741.459565] nvme nvme1: creating 4 I/O queues.
[98741.462227] nvme nvme1: mapped 4/0/0 default/read/poll queues.
Cheers,
Thomas