[PATCH RFC 0/1] Add visibility for native NVMe multipath using debugfs
Keith Busch
kbusch at kernel.org
Wed Jul 24 07:37:12 PDT 2024
On Mon, Jul 22, 2024 at 03:01:08PM +0530, Nilay Shroff wrote:
> # cat /sys/kernel/debug/block/nvme1n1/multipath
> io-policy: queue-depth
> io-path:
> --------
> node  path       ctrl   qdepth  ana-state
> 2     nvme1c1n1  nvme1  1328    optimized
> 2     nvme1c3n1  nvme3  1324    optimized
> 3     nvme1c1n1  nvme1  1328    optimized
> 3     nvme1c3n1  nvme3  1324    optimized
>
> The above output was captured while I/O was running and accessing
> namespace nvme1n1. From the above output, we see that the iopolicy is
> set to "queue-depth". When we have an I/O workload running on numa
> node 2, accessing namespace "nvme1n1", the I/O path nvme1c1n1/nvme1
> has a queue depth of 1328 and the other I/O path nvme1c3n1/nvme3 has
> a queue depth of 1324. Both paths are optimized, and it seems that
> both paths are equally utilized for forwarding I/O.
You can get the outstanding queue-depth from iostats too, and that
doesn't rely on the queue-depth io policy. It does, however, require
that stats are enabled, but that's probably a more reasonable
prerequisite than a particular io policy.
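As a rough user-space sketch of that alternative: the block layer's
per-device stat file documents its ninth field as the number of I/Os
currently in flight, so something like the following reads it directly.
The device name is only an example taken from the output above, and it
assumes iostats are enabled for that queue
(/sys/block/<dev>/queue/iostats).

#include <stdio.h>
#include <stdlib.h>

/*
 * Print the in-flight I/O count for one block device by reading the
 * ninth field of /sys/block/<dev>/stat.  Requires iostats accounting
 * to be enabled on the request queue.
 */
int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "nvme1c1n1";
	unsigned long long field[9];
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/stat", dev);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu %llu %llu",
		   &field[0], &field[1], &field[2], &field[3], &field[4],
		   &field[5], &field[6], &field[7], &field[8]) != 9) {
		fprintf(stderr, "%s: unexpected stat format\n", path);
		fclose(f);
		return EXIT_FAILURE;
	}
	fclose(f);
	printf("%s: %llu I/Os in flight\n", dev, field[8]);
	return EXIT_SUCCESS;
}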
> The same could be said for a workload running on numa node 3.
The output for all numa nodes will be the same regardless of which node
a workload is running on (the accounting isn't per-node), so I'm not
sure outputting qdepth again for each node is useful.
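To illustrate why the value repeats, here is a simplified sketch (not
the kernel code) of a queue-depth style selector, assuming the
outstanding-command count is kept as a single per-controller counter.
Since there is no per-node state, breaking the depth out per NUMA node
can only show the same number for every node.

#include <stdatomic.h>
#include <stddef.h>

/* Simplified model: one outstanding-command counter per path/controller. */
struct path {
	const char *name;
	atomic_int nr_active;	/* commands currently in flight on this path */
};

/*
 * Pick the path with the fewest outstanding commands.  Nothing here is
 * indexed by NUMA node, so a per-node report of nr_active would repeat
 * the same value for each node.
 */
struct path *pick_least_busy(struct path *paths, size_t npaths)
{
	struct path *best = &paths[0];

	for (size_t i = 1; i < npaths; i++)
		if (atomic_load(&paths[i].nr_active) <
		    atomic_load(&best->nr_active))
			best = &paths[i];
	return best;
}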