[PATCH RFC 0/1] Add visibility for native NVMe multipath using debugfs
Nilay Shroff
nilay at linux.ibm.com
Mon Jul 22 22:18:02 PDT 2024
On 7/22/24 19:48, Daniel Wagner wrote:
> On Mon, Jul 22, 2024 at 03:01:08PM GMT, Nilay Shroff wrote:
>> This patch proposes adding a new debugfs file entry for NVMe native
>> multipath. As we know, NVMe native multipath today supports three different
>> io-policies (numa, round-robin and queue-depth) for selecting the optimal
>> I/O path and forwarding data. However, we don't yet have any visibility
>> into the I/O path being selected by the NVMe native multipath code.
>>
>> IMO, it'd be nice to have this visibility information available under
>> debugfs, which could help a user validate that the I/O path being chosen
>> is optimal for a given io-policy. This patch proposes adding a debugfs
>> file for each head disk node on the system. The proposal is to create a
>> file named "multipath" under "/sys/kernel/debug/nvmeXnY/".
>>
>> Please find below the output generated with this patch applied on a system
>> with a multi-controller PCIe NVMe disk attached to it. This system is also
>> an NVMf-TCP host connected to an NVMf-TCP target over two NIC cards. The
>> system had two NUMA nodes online when the output below was captured:
>
> Wouldn't it make sense to extend nvme-cli instead of adding additional
> debugfs entries to the kernel, e.g. extending show-topology?
>
Yeah, we could extend nvme-cli to print this (multipathing) information,
but from where would nvme-cli retrieve it? AFAIK, today this multipath
information is not exported by the NVMe driver. So we would first have to
make it available from the driver, either through sysfs or an ioctl, and
then nvme-cli could parse it and show it to the user. If everyone thinks
it's worth extending nvme-cli so that it could display this information,
then yes, we can certainly implement it. Please suggest.
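For illustration, below is a minimal sketch (not the actual patch) of how
such a "multipath" debugfs file could be wired up for a head disk node
using the generic seq_file helpers. Names such as nvme_mpath_debugfs_show,
and the head->current_path / ns->ctrl->instance accesses, are my reading
of the driver internals and may differ across kernel versions; locking is
simplified here (the driver itself protects current_path with SRCU):

#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/nodemask.h>
#include <linux/rcupdate.h>

/* Assumes the nvme_ns_head/nvme_ns definitions from drivers/nvme/host/nvme.h */

static int nvme_mpath_debugfs_show(struct seq_file *m, void *unused)
{
	struct nvme_ns_head *head = m->private;
	int node;

	/* Dump the path currently cached for each online NUMA node. */
	rcu_read_lock();
	for_each_online_node(node) {
		struct nvme_ns *ns = rcu_dereference(head->current_path[node]);

		if (ns)
			seq_printf(m, "node %d -> nvme%d\n",
				   node, ns->ctrl->instance);
	}
	rcu_read_unlock();
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(nvme_mpath_debugfs);

static void nvme_mpath_add_debugfs(struct nvme_ns_head *head,
				   struct dentry *parent)
{
	/* Creates <parent>/multipath, e.g. /sys/kernel/debug/nvmeXnY/multipath */
	debugfs_create_file("multipath", 0444, parent, head,
			    &nvme_mpath_debugfs_fops);
}

The same show routine could later back a sysfs attribute instead, if that
turns out to be the preferred interface for nvme-cli to consume.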
Thanks,
--Nilay