Increase maxsize of io_queue_depth for nvme driver?
Apachez
apachez at gmail.com
Sun Sep 14 05:45:35 PDT 2025
Hi,
According to the current version of the nvme driver in Linux kernel
6.17-rc5, the boundaries for the io_queue_depth module parameter are
set to >= 2 and <= 4095:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/nvme/host/pci.c?h=v6.17-rc5#n93
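For reference, the check in question looks roughly like this
(paraphrased from memory rather than copied verbatim; see the link
above for the exact code and where the define actually lives):

#define NVME_PCI_MAX_QUEUE_SIZE	4095

static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
{
	int ret;
	u32 n;

	ret = kstrtou32(val, 10, &n);
	if (ret != 0 || n < 2 || n > NVME_PCI_MAX_QUEUE_SIZE)
		return -EINVAL;

	return param_set_uint(val, kp);
}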
However, according to
https://nvmexpress.org/wp-content/uploads/NVMe-NVM-Express-2.0a-2021.07.26-Ratified.pdf
the Maximum Queue Entries Supported (MQES) field occupies bits 15:0 of
the CAP register and is a 0's based 16-bit value, so a controller can
advertise up to 65536 queue entries. That also matches the "up to 64K"
queue size that various internet sources quote for NVMe.
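In other words (cap_mqes_entries() below is just a made-up helper name
for illustration, not something from the driver):

#include <stdint.h>

/* CAP.MQES is bits 15:0 of the Controller Capabilities register and is
 * a 0's based value, so the largest queue a controller can advertise is
 * 0xffff + 1 = 65536 entries. */
static inline uint32_t cap_mqes_entries(uint64_t cap)
{
	return (uint32_t)(cap & 0xffff) + 1;
}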
Using nvme-cli on an 800GB Micron 7450 NVMe SSD I get this result:
# nvme show-regs -H /dev/nvme0 | grep -i 'Maximum Queue Entries Supported'
Maximum Queue Entries Supported (MQES): 8192
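As far as I can tell the driver already honours that register when the
queues are sized; from memory, nvme_pci_enable() derives the effective
depth roughly like this (paraphrased, please check the source):

	dev->q_depth = min_t(u32, NVME_CAP_MQES(dev->ctrl.cap) + 1,
			     io_queue_depth);
	dev->ctrl.sqsize = dev->q_depth - 1;	/* 0's based */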
I would like to propose that NVME_PCI_MAX_QUEUE_SIZE be increased from
4095 to the maximum queue size that MQES can express, to match the
current NVMe specification and to give the sysop the ability to fully
utilize the queue depth the hardware actually supports, or am I missing
something here?
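Naively the change itself would be little more than bumping the define
(illustration only, untested; anything else in the driver that assumes
the current 4095 ceiling would of course need to be audited as well):

/* Illustration only: lift the ceiling on the io_queue_depth module
 * parameter to the largest queue size MQES can express. */
#define NVME_PCI_MAX_QUEUE_SIZE	65535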
A spinoff of this would be that, instead of the current fixed default
of io_queue_depth=1024, the driver could first attempt to use the MQES
value reported by the device, and only fall back to the fixed default
when the device does not report a usable value?
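Something along these lines (rough sketch only, with made-up helper and
flag names, untested):

/* Hypothetical: io_queue_depth_user_set would be set in the module
 * parameter setter when the admin passes io_queue_depth explicitly. */
static bool io_queue_depth_user_set;

static u32 nvme_pick_io_queue_depth(struct nvme_dev *dev)
{
	u32 mqes_entries = NVME_CAP_MQES(dev->ctrl.cap) + 1;	/* 0's based */

	if (io_queue_depth_user_set)
		return min_t(u32, io_queue_depth, mqes_entries);

	/* No explicit request from the admin: follow the device's MQES,
	 * but stay within whatever ceiling the driver enforces. */
	return min_t(u32, mqes_entries, NVME_PCI_MAX_QUEUE_SIZE);
}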
Kind Regards
Apachez