[PATCH 1/3] nvme: multipath: Implemented new iopolicy "queue-depth"

Ewan Milne emilne at redhat.com
Tue Nov 7 13:56:14 PST 2023


Yes, we have some graphs.  John M. presented them at ALPSS and there were
some earlier ones at LSF/MM.  I'll see if I can put up the latest set
for download.

The basic issue is that with round-robin, requests for most/all of the
tagset space can end up on a path that is responding slowly, so we see
a significant imbalance in path utilization.
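
To make the idea concrete, here is a rough userspace sketch of how
least-queue-depth selection behaves (this is not the patch code, and
all of the names below are made up for illustration): each path keeps
an atomic count of in-flight requests, incremented at submission and
decremented at completion, and new I/O goes to the path with the
smallest count. In the actual patch the counter is an atomic in the
nvme_ctrl struct.

/* Illustrative userspace model of a "queue-depth" path selector.
 * A slow path naturally receives less new work as its in-flight
 * count builds up.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_PATHS 2

struct path {
	const char *name;
	atomic_int nr_active;	/* requests currently outstanding */
};

static struct path paths[NR_PATHS] = {
	{ .name = "pathA" },
	{ .name = "pathB" },
};

/* Pick the path with the fewest outstanding requests and account
 * for the new request on it. */
static struct path *qd_select(void)
{
	struct path *best = &paths[0];
	int i;

	for (i = 1; i < NR_PATHS; i++)
		if (atomic_load(&paths[i].nr_active) <
		    atomic_load(&best->nr_active))
			best = &paths[i];

	atomic_fetch_add(&best->nr_active, 1);
	return best;
}

/* Called when a request completes on @p. */
static void qd_complete(struct path *p)
{
	atomic_fetch_sub(&p->nr_active, 1);
}

int main(void)
{
	/* Pretend pathB is slow: its request stays outstanding. */
	struct path *p1 = qd_select();	/* pathA: counters tied at 0 */
	struct path *p2 = qd_select();	/* pathB: pathA has one in flight */
	(void)p2;
	qd_complete(p1);		/* pathA completes quickly */

	/* The next submission avoids the still-busy pathB. */
	printf("next request goes to %s\n", qd_select()->name);
	return 0;
}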


-Ewan

On Tue, Nov 7, 2023 at 4:46 PM Chaitanya Kulkarni <chaitanyak at nvidia.com> wrote:
>
> On 11/7/23 13:23, Ewan D. Milne wrote:
> > The existing iopolicies are inefficient in some cases, such as
> > the presence of a path with high latency. The round-robin
> > policy would use that path equally with faster paths, which
> > results in sub-optimal performance.
>
> do you have performance numbers for such a case?
>
> > The queue-depth policy instead sends I/O requests down the path
> > with the fewest requests in its request queue. Paths with lower
> > latency will clear requests more quickly and have fewer requests
> > in their queues compared to "bad" paths. The aim is to use those
> > paths the most to bring down overall latency.
> >
> > This implementation adds an atomic variable to the nvme_ctrl
> > struct to represent the queue depth. It is updated each time a
> > request specific to that controller starts or ends.
> >
> > [edm: patch developed by Thomas Song @ Pure Storage, fixed whitespace
> >        and compilation warnings, updated MODULE_PARM description, and
> >        fixed potential issue with ->current_path[] being used]
> >
> > Co-developed-by: Thomas Song <tsong at purestorage.com>
> > Signed-off-by: Ewan D. Milne <emilne at redhat.com>
> > ---
> >
>
> any performance comparison that shows the difference?
>
> -ck
>
>



