[PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
Daniel Wagner
dwagner at suse.de
Mon Sep 1 09:36:29 PDT 2025
Hi Nilay,
On Mon, Sep 01, 2025 at 02:51:09PM +0530, Nilay Shroff wrote:
> Hi Daniel and Hannes,
>
> Just a gentle ping on this one...
>
> Do you agree with the reasoning I suggested for filtering
> columns based on iopolicy? If we all agree, I'd send out the
> next patchset with the appropriate changes.
I was waiting for Hannes' input here, as he was part of the discussion.
> >> But really, I'm not sure if we should print out values from the various
> >> I/O policies. For NUMA it probably makes sense, but for round-robin and
> >> queue-depth the values are extremely volatile, so I wonder what the
> >> benefit for the user is here.
> >>
> >
> > I think the qdepth output could still be useful. For example, if I/Os are
> > queuing up on one path (perhaps because that path is slower), then the Qdepth
> > value might help indicate something unusual or explain why one path is being
> > chosen over another.
> >
> > That said, if we all agree that tools or scripts should ideally rely on JSON
> > output for parsing, then the tabular output could be simplified further:
> >
> > - For numa iopolicy: print <Nodes> and exclude <Qdepth>.
> > - For queue-depth iopolicy: print <Qdepth> and exclude <Nodes>.
> > - For round-robin iopolicy: exclude both <Nodes> and <Qdepth>.
Looks reasonable to me.
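Roughly something like this, as a self-contained sketch of the column
selection (the enum, struct and helper names are illustrative
placeholders, not the actual nvme-cli code):

#include <stdio.h>

/* Illustrative placeholders, not the nvme-cli definitions. */
enum iopolicy { POLICY_NUMA, POLICY_RR, POLICY_QDEPTH };

struct path_info {
	const char *name;	/* e.g. "nvme0c0n1" */
	const char *state;	/* e.g. "live" */
	const char *nodes;	/* NUMA nodes serving this path */
	int qdepth;		/* snapshot of outstanding I/Os */
};

static void show_paths(const struct path_info *paths, int n,
		       enum iopolicy policy)
{
	int i;

	/* Print only the column that is meaningful for the active
	 * policy: <Nodes> for numa, <Qdepth> for queue-depth, and
	 * neither for round-robin. */
	printf("%-12s %-8s", "Path", "State");
	if (policy == POLICY_NUMA)
		printf(" %-8s", "Nodes");
	else if (policy == POLICY_QDEPTH)
		printf(" %-8s", "Qdepth");
	printf("\n");

	for (i = 0; i < n; i++) {
		printf("%-12s %-8s", paths[i].name, paths[i].state);
		if (policy == POLICY_NUMA)
			printf(" %-8s", paths[i].nodes);
		else if (policy == POLICY_QDEPTH)
			printf(" %-8d", paths[i].qdepth);
		printf("\n");
	}
}

int main(void)
{
	struct path_info paths[] = {
		{ "nvme0c0n1", "live", "0-1", 3 },
		{ "nvme0c1n1", "live", "2-3", 17 },
	};

	show_paths(paths, 2, POLICY_QDEPTH);
	return 0;
}

The JSON output could keep emitting all fields unconditionally, so
scripts would be unaffected by the iopolicy-dependent tabular view.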
> > Does this sound reasonable? Or do we still want to avoid printing
> > <Qdepth> even for queue-depth iopolicy?
I am fine with printing the qdepth value as long as it is documented what
it means. IIRC there are other tools which also just show a snapshot of
some statistics.
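For documenting it: the value is just a point-in-time snapshot of the
per-path outstanding I/O count that the queue-depth policy uses to pick
the least busy path. A simplified sketch of that selection (illustrative
names, not the actual kernel code):

#include <stdio.h>

/* Loosely modeled on the idea behind the queue-depth iopolicy;
 * the struct and function names here are illustrative only. */
struct ctrl_path {
	int nr_active;	/* outstanding I/Os on this path right now */
	int live;	/* is the path usable */
};

/* Pick the live path with the fewest outstanding I/Os. The count read
 * here changes with every submission and completion, which is why a
 * printed Qdepth column is only ever a momentary snapshot. */
static struct ctrl_path *pick_least_busy(struct ctrl_path *paths, int n)
{
	struct ctrl_path *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (!paths[i].live)
			continue;
		if (!best || paths[i].nr_active < best->nr_active)
			best = &paths[i];
	}
	return best;
}

int main(void)
{
	struct ctrl_path paths[] = { { 3, 1 }, { 17, 1 }, { 1, 0 } };
	struct ctrl_path *p = pick_least_busy(paths, 3);

	printf("picked path with nr_active=%d\n", p ? p->nr_active : -1);
	return 0;
}

So describing the column as "outstanding I/Os on the path at the time of
the query" should be enough.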
BTW, there is some discussion on GitHub regarding something like a
'monitor' feature: https://github.com/linux-nvme/nvme-cli/issues/2189
That might be something worth considering here as well.
Thanks,
Daniel