Number of data and admin queues in use
Thomas Glanzmann
thomas at glanzmann.de
Tue Jul 15 09:38:47 PDT 2025
Hello Keith,
* Keith Busch <kbusch at kernel.org> [2025-07-15 16:39]:
> For PCI, the driver automatically handles the queue and interrupt setup,
> and cpu assignment.
> For TCP (and all fabrics transports), you have to specify how many
> connections you want to make ("nr_io_queues=X") when you're setting up
> your initial fabrics connection.
> If you want to see what you've ended up with, you can consult the
> namespaces' sysfs entries:
> How many IO queues are there:
> # ls -1 /sys/block/nvme0n1/mq/ | wc -l
> 64
> How large is each IO queue:
> # cat /sys/block/nvme0n1/queue/nr_requests
> 1023
thank you for taking the time to answer me. I had been looking for an answer to
this for years. I had stumbled on nr_requests before, but /sys/block/nvme0n1/mq/
was new to me. The maximum that the NetApp appears to support is:
na2501::*> vserver nvme show-host-priority
Node                  Protocol  Priority I/O Queue Count I/O Queue Depth
--------------------- --------- -------- --------------- ---------------
na2501-01             fc-nvme
                                regular                4              32
                                high                   6              32
                      nvme-tcp
                                regular                2             128
                                high                   4             128
na2501-02             fc-nvme
                                regular                4              32
                                high                   6              32
                      nvme-tcp
                                regular                2             128
                                high                   4             128
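To match the nvme-tcp limits above, the queue count goes onto the initial
fabrics connect. A minimal sketch of what that looks like (traddr, trsvcid and
NQN below are placeholders, not my actual setup):

nvme connect -t tcp -a 192.0.2.10 -s 4420 \
    -n nqn.1992-08.com.netapp:sn.example --nr-io-queues=4

nvme-cli's -i/--nr-io-queues is what ends up as nr_io_queues on the fabrics
connection. On my host the result looks like this: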
(live) [~] ls -1 /sys/block/nvme0c0n1/mq/ | wc -l
4
(live) [~] ls -1 /sys/block/nvme0c1n1/mq/ | wc -l
4
(live) [~] cat /sys/block/nvme0c0n1/queue/nr_requests
127
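That matches the NetApp side: 4 I/O queues per controller path, and
nr_requests of 127 is presumably the 128-entry queue minus one, since an NVMe
queue of size N can only hold N-1 commands before it is full. A quick loop
over the controller paths (only reusing the sysfs files from above) makes the
check easier:

for d in /sys/block/nvme0c*n1; do
    # one line per controller path: I/O queue count and per-queue depth
    echo "$d: $(ls -1 "$d"/mq | wc -l) queues, nr_requests $(cat "$d"/queue/nr_requests)"
done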
With ext4 and fio I get:
fio --ioengine=libaio --refill_buffers --filesize=4G --ramp_time=2s --numjobs=40 --direct=1 --verify=0 --randrepeat=0 --group_reporting --directory /mnt --name=4khqd --blocksize=4k --iodepth=50 --readwrite=write
write: IOPS=159k, BW=620MiB/s (651MB/s)(159GiB/261872msec); 0 zone resets
fio --ioengine=libaio --refill_buffers --filesize=4G --ramp_time=2s --numjobs=40 --direct=1 --verify=0 --randrepeat=0 --group_reporting --directory /mnt --name=4khqd --blocksize=4k --iodepth=50 --readwrite=read
read: IOPS=449k, BW=1752MiB/s (1838MB/s)(157GiB/91645msec)
fio --ioengine=libaio --refill_buffers --filesize=4G --ramp_time=2s --numjobs=40 --direct=1 --verify=0 --randrepeat=0 --group_reporting --directory /mnt --name=1mhqd --blocksize=1m --iodepth=50 --readwrite=write
write: IOPS=1965, BW=1970MiB/s (2066MB/s)(157GiB/81434msec); 0 zone resets
fio --ioengine=libaio --refill_buffers --filesize=4G --ramp_time=2s --numjobs=40 --direct=1 --verify=0 --randrepeat=0 --group_reporting --directory /mnt --name=1mhqd --blocksize=1m --iodepth=50 --readwrite=read
read: IOPS=4034, BW=4044MiB/s (4241MB/s)(153GiB/38682msec)
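As a back-of-the-envelope check, these jobs more than saturate the available
tags: numjobs=40 * iodepth=50 asks for up to 2000 in-flight I/Os, while the
transport offers 4 queues * 127 tags = 508 per controller path, so the fabric
queues should stay full for the whole run.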
Using 'iostat -xm 2' I can see that it actually utilizes the queue depth by
watching aqu-sz.
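For example, limiting the output to the namespace keeps it readable (aqu-sz is
the average number of requests in flight on the device):

iostat -xm 2 nvme0n1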
Cheers,
Thomas