[PATCH RFC 0/1] Add visibility for native NVMe multipath using debugfs

Daniel Wagner dwagner at suse.de
Mon Jul 22 07:18:31 PDT 2024


On Mon, Jul 22, 2024 at 03:01:08PM GMT, Nilay Shroff wrote:
> This patch proposes adding a new debugfs file entry for NVMe native
> multipath. As we know, NVMe native multipath today supports three different
> io-policies (numa, round-robin and queue-depth) for selecting the optimal
> I/O path and forwarding data. However, we don't yet have any visibility
> into which I/O path is being selected by the NVMe native multipath code.
> 
> IMO, it'd be nice to have this visibility available under debugfs, which
> could help a user validate that the I/O path being chosen is optimal for a
> given io-policy. This patch proposes adding a debugfs file for each head
> disk node on the system. The proposal is to create a file named
> "multipath" under "/sys/kernel/debug/nvmeXnY/".
> 
> Please find below the output generated with this patch applied on a system
> with a multi-controller PCIe NVMe disk attached to it. This system is also
> an NVMf-TCP host connected to an NVMf-TCP target over two NIC cards. The
> system had two NUMA nodes online when the below output was
> captured:
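
For reference, a minimal sketch of what the kernel side of such a file could
look like. This is not the posted patch: the callback and hook names are made
up, the field names (head->srcu, head->list, head->current_path[]) follow the
current upstream nvme/host/multipath.c, and the file is hung off the block
layer's per-queue debugfs directory as an assumption rather than a new
top-level /sys/kernel/debug/nvmeXnY/ directory:

#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include "nvme.h"

/* Sketch only: list the sibling paths of a head disk and mark the path
 * currently cached by the multipath code. */
static int nvme_mpath_debugfs_show(struct seq_file *m, void *unused)
{
	struct nvme_ns_head *head = m->private;
	struct nvme_ns *cur, *ns;
	int srcu_idx;

	srcu_idx = srcu_read_lock(&head->srcu);
	/* current_path[] is per NUMA node; node 0 is shown for brevity. */
	cur = srcu_dereference(head->current_path[0], &head->srcu);
	list_for_each_entry_srcu(ns, &head->list, siblings,
				 srcu_read_lock_held(&head->srcu))
		seq_printf(m, "%s%s\n", ns->disk->disk_name,
			   ns == cur ? " (current)" : "");
	srcu_read_unlock(&head->srcu, srcu_idx);
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(nvme_mpath_debugfs);

/* Hypothetical hook, called once the head gendisk is live. */
static void nvme_mpath_add_debugfs(struct nvme_ns_head *head)
{
	debugfs_create_file("multipath", 0444,
			    head->disk->queue->debugfs_dir, /* assumption */
			    head, &nvme_mpath_debugfs_fops);
}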

Wouldn't it make more sense to extend nvme-cli instead of adding additional
debugfs entries to the kernel, e.g. by extending show-topology?
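
For comparison, the io-policy in effect is already exported through sysfs, so
a tool like nvme-cli can report at least part of this picture without new
kernel interfaces. A rough userspace illustration (not nvme-cli code) that
reads the existing /sys/class/nvme-subsystem/nvme-subsysN/iopolicy attribute:

#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *base = "/sys/class/nvme-subsystem";
	struct dirent *d;
	char path[512], policy[64];
	DIR *dir = opendir(base);

	if (!dir)
		return 1;
	/* One nvme-subsysN directory exists per NVMe subsystem. */
	while ((d = readdir(dir))) {
		if (strncmp(d->d_name, "nvme-subsys", 11))
			continue;
		snprintf(path, sizeof(path), "%s/%s/iopolicy", base, d->d_name);
		FILE *f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(policy, sizeof(policy), f))
			printf("%s: iopolicy=%s", d->d_name, policy);
		fclose(f);
	}
	closedir(dir);
	return 0;
}

Which path actually carries the I/O is, of course, exactly the piece that is
not visible today, which is what the quoted patch is trying to address.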
