[PATCH v5] nvme: multipath: Implement new iopolicy "queue-depth"
Keith Busch
kbusch at kernel.org
Wed May 22 10:32:02 PDT 2024
On Wed, May 22, 2024 at 12:54:06PM -0400, John Meneghini wrote:
> From: "Ewan D. Milne" <emilne at redhat.com>
>
> The round-robin path selector is inefficient in cases where there is a
> difference in latency between paths. In the presence of one or more
> high-latency paths, the round-robin selector continues to use those
> paths just as often as the others. This biases I/O toward the highest
> latency path and can cause a significant decrease in overall
> performance as I/Os pile up on that path. The problem is acute with
> NVMe-oF controllers.
>
> The queue-depth policy instead sends I/O requests down the path with
> the fewest requests in its request queue. Paths with lower latency
> will clear requests more quickly and have fewer requests queued than
> higher latency paths. The goal of this path selector is to make more
> use of lower latency paths, which brings down overall I/O latency and
> increases throughput and performance.
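
As a minimal userspace sketch of the idea (not the kernel code in this
patch; the struct and function names below are illustrative assumptions),
the selector walks the paths and picks the one with the fewest in-flight
requests, counted with a per-path atomic:

	#include <stdatomic.h>
	#include <stdio.h>

	#define NR_PATHS 3

	struct path {
		const char *name;
		atomic_int nr_active;	/* requests currently in flight */
	};

	/* Pick the path with the lowest outstanding request count. */
	static struct path *queue_depth_select(struct path *paths, int n)
	{
		struct path *best = &paths[0];
		int best_depth = atomic_load(&paths[0].nr_active);

		for (int i = 1; i < n; i++) {
			int depth = atomic_load(&paths[i].nr_active);

			if (depth < best_depth) {
				best = &paths[i];
				best_depth = depth;
			}
		}
		return best;
	}

	int main(void)
	{
		struct path paths[NR_PATHS] = {
			{ .name = "path0" },
			{ .name = "path1" },
			{ .name = "path2" },
		};

		/* Simulate a slow path that has accumulated requests. */
		atomic_store(&paths[0].nr_active, 8);
		atomic_store(&paths[1].nr_active, 2);
		atomic_store(&paths[2].nr_active, 5);

		struct path *p = queue_depth_select(paths, NR_PATHS);
		atomic_fetch_add(&p->nr_active, 1); /* count the new I/O */
		printf("selected %s\n", p->name);   /* prints "path1" */
		/* on completion: atomic_fetch_sub(&p->nr_active, 1); */
		return 0;
	}

A slow path clears requests slowly, so its counter stays high and the
selector naturally steers new I/O toward the faster paths.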
I'm okay with this as-is, though I don't think you need either of the
atomic_set() calls.
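
To illustrate the point (a hedged userspace analogue, assuming the
counter is embedded in zeroed memory the way kzalloc()-backed kernel
structures are; the field name nr_active is taken from the discussion
above, the rest is illustrative):

	#include <stdatomic.h>
	#include <stdlib.h>
	#include <assert.h>

	struct ctrl {
		atomic_int nr_active;	/* illustrative counter */
	};

	int main(void)
	{
		/* calloc() zeroes the allocation, so the counter already
		 * reads 0; an explicit atomic_set(..., 0) (or here,
		 * atomic_store(..., 0)) would be redundant. This assumes
		 * an all-zero-bytes representation of 0, which holds for
		 * the kernel's atomic_t on supported targets. */
		struct ctrl *c = calloc(1, sizeof(*c));

		assert(c && atomic_load(&c->nr_active) == 0);
		free(c);
		return 0;
	}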
Christoph, Sagi, the 6.10 merge window is still open, and this series
has been iterating since long before it opened. Any objection?