[PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays
John Meneghini
jmeneghi at redhat.com
Wed Aug 27 14:51:48 PDT 2025
I'm sorry, but Red Hat will not approve an upstream change like this that modifies the policy for OTHER VENDORS' arrays.
You can't simply change the I/O policy for all of these arrays. Many vendors ship no autoconnect udev rules because they don't want any: they want the default ctrl_loss_tmo and the default iopolicy (numa). You can't just change this for them.
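For reference, both defaults are visible in sysfs on reasonably recent kernels. A quick sketch, assuming typical instance names (the nvme-subsys*/nvme* numbering varies per host, so adjust the globs):

  # current path selector for each NVMe subsystem (default: numa)
  grep -H . /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

  # current connection-loss timeout, in seconds, for each fabrics
  # controller (default: 600; -1 means "never give up reconnecting")
  grep -H . /sys/class/nvme/nvme*/ctrl_loss_tmo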
If you want people to migrate their udev rules out of separate files and into a single autoconnect file like this, then you'll have to get them to agree.
When I look upstream, I see exactly three vendors who ship a udev rule for their iopolicy:
nvme-cli(master) > ls -1 nvmf-autoconnect/udev-rules/71*
nvmf-autoconnect/udev-rules/71-nvmf-hpe.rules.in
nvmf-autoconnect/udev-rules/71-nvmf-netapp.rules.in
nvmf-autoconnect/udev-rules/71-nvmf-vastdata.rules.in
I suggest that you get these three vendors to agree to move their policy into a single 71-nvmf-multipath-policy.rules.in file, and then leave everyone else's stuff alone.
In the future, vendors who want to add a multipath-policy rule can then use the new file instead of adding their own.
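To make that concrete, the merged file could look like the sketch below. The model strings and policy values are placeholders only, not copied from the three vendor files; each vendor would carry over its own exact match and policy unchanged:

  # 71-nvmf-multipath-policy.rules.in -- hypothetical merged policy file
  # (model strings and policies below are placeholders, not the vendors'
  #  real matches; each vendor copies in its current rule as-is)
  ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="<HPE model>", ATTR{iopolicy}="round-robin"
  ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="<NetApp model>", ATTR{iopolicy}="round-robin"
  ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="<VAST Data model>", ATTR{iopolicy}="round-robin"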
/John
On 8/20/25 5:32 PM, Xose Vazquez Perez wrote:
> One file per vendor or device is a bit excessive for two to four rules.
>
>
> If possible, select round-robin (>=5.1), or queue-depth (>=6.11).
> round-robin is a basic selector, and only works well under ideal conditions.
>
> An NVMe benchmark, round-robin vs queue-depth, shows how bad it is:
> https://marc.info/?l=linux-kernel&m=171931850925572
> https://marc.info/?l=linux-kernel&m=171931852025575
> https://github.com/johnmeneghini/iopolicy/?tab=readme-ov-file#sample-data
> https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf
>
>
> [ctrl_loss_tmo default value is 600 seconds (ten minutes)]
You can't remove this, because vendors have ctrl_loss_tmo set to -1 on purpose.
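For illustration only (an example invocation, not any vendor's actual rule): a ctrl_loss_tmo of -1 tells the host to keep reconnecting forever instead of giving up after the 600-second default, e.g.:

  # reconnect indefinitely; -1 disables the controller-loss timeout
  nvme connect-all --ctrl-loss-tmo=-1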
> v3:
> - add Fujitsu/ETERNUS AB/HB
> - add Hitachi/VSP
>
> v2:
> - fix ctrl_loss_tmo comment
> - add Infinidat/InfiniBox
>
>
> Cc: Wayne Berthiaume <Wayne.Berthiaume at dell.com>
> Cc: Vasuki Manikarnike <vasuki.manikarnike at hpe.com>
> Cc: Matthias Rudolph <Matthias.Rudolph at hitachivantara.com>
> Cc: Martin George <marting at netapp.com>
> Cc: NetApp RDAC team <ng-eseries-upstream-maintainers at netapp.com>
> Cc: Zou Ming <zouming.zouming at huawei.com>
> Cc: Li Xiaokeng <lixiaokeng at huawei.com>
> Cc: Randy Jennings <randyj at purestorage.com>
> Cc: Jyoti Rani <jrani at purestorage.com>
> Cc: Brian Bunker <brian at purestorage.com>
> Cc: Uday Shankar <ushankar at purestorage.com>
> Cc: Chaitanya Kulkarni <kch at nvidia.com>
> Cc: Sagi Grimberg <sagi at grimberg.me>
> Cc: Keith Busch <kbusch at kernel.org>
> Cc: Christoph Hellwig <hch at lst.de>
> Cc: Marco Patalano <mpatalan at redhat.com>
> Cc: Ewan D. Milne <emilne at redhat.com>
> Cc: John Meneghini <jmeneghi at redhat.com>
> Cc: Daniel Wagner <dwagner at suse.de>
> Cc: Daniel Wagner <wagi at monom.org>
> Cc: Hannes Reinecke <hare at suse.de>
> Cc: Martin Wilck <mwilck at suse.com>
> Cc: Benjamin Marzinski <bmarzins at redhat.com>
> Cc: Christophe Varoqui <christophe.varoqui at opensvc.com>
> Cc: BLOCK-ML <linux-block at vger.kernel.org>
> Cc: NVME-ML <linux-nvme at lists.infradead.org>
> Cc: SCSI-ML <linux-scsi at vger.kernel.org>
> Cc: DM_DEVEL-ML <dm-devel at lists.linux.dev>
> Signed-off-by: Xose Vazquez Perez <xose.vazquez at gmail.com>
> ---
>
> This will be the last iteration of this patch; there are no other NVMe storage
> array manufacturers left to add.
>
>
> Maybe these rules should be merged into this new file?
> 71-nvmf-hpe.rules.in
> 71-nvmf-netapp.rules.in
> 71-nvmf-vastdata.rules.in
>
> ---
> .../80-nvmf-storage_arrays.rules.in | 51 +++++++++++++++++++
> 1 file changed, 51 insertions(+)
> create mode 100644 nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
>
> diff --git a/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> new file mode 100644
> index 00000000..ac5df797
> --- /dev/null
> +++ b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> @@ -0,0 +1,51 @@
> +##### Storage arrays
> +
> +#### Set iopolicy for NVMe-oF
> +### iopolicy: numa (default), round-robin (>=5.1), or queue-depth (>=6.11)
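> +### Note: each model below deliberately gets two rules. udev applies them in
> +### order, so on kernels without queue-depth support (<6.11) the second sysfs
> +### write fails and round-robin remains; on newer kernels queue-depth wins.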
> +
> +## Dell EMC
> +# PowerMax
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="EMC PowerMax", ATTR{iopolicy}="round-robin"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="EMC PowerMax", ATTR{iopolicy}="queue-depth"
> +# PowerStore
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="dellemc-powerstore", ATTR{iopolicy}="round-robin"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="dellemc-powerstore", ATTR{iopolicy}="queue-depth"
> +
> +## Fujitsu
> +# ETERNUS AB/HB
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Fujitsu ETERNUS AB/HB Series", ATTR{iopolicy}="round-robin"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Fujitsu ETERNUS AB/HB Series", ATTR{iopolicy}="queue-depth"
> +
> +## Hitachi Vantara
> +# VSP
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="HITACHI SVOS-RF-System", ATTR{iopolicy}="round-robin"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="HITACHI SVOS-RF-System", ATTR{iopolicy}="queue-depth"
> +
> +## Huawei
> +# OceanStor
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Huawei-XSG1", ATTR{iopolicy}="round-robin"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Huawei-XSG1", ATTR{iopolicy}="queue-depth"
> +
> +## IBM
> +# FlashSystem (RamSan)
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="FlashSystem", ATTR{iopolicy}="round-robin"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="FlashSystem", ATTR{iopolicy}="queue-depth"
> +# FlashSystem (Storwize/SVC)
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="IBM*214", ATTR{iopolicy}="round-robin"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="IBM*214", ATTR{iopolicy}="queue-depth"
> +
> +## Infinidat
> +# InfiniBox
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="InfiniBox", ATTR{iopolicy}="round-robin"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="InfiniBox", ATTR{iopolicy}="queue-depth"
> +
> +## Pure
> +# FlashArray
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Pure Storage FlashArray", ATTR{iopolicy}="round-robin"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Pure Storage FlashArray", ATTR{iopolicy}="queue-depth"
> +
> +
> +##### EOF