[PATCH 1/3] nvme: multipath: Implemented new iopolicy "queue-depth"

Chaitanya Kulkarni chaitanyak at nvidia.com
Tue Nov 7 13:46:12 PST 2023


On 11/7/23 13:23, Ewan D. Milne wrote:
> The existing iopolicies are inefficient in some cases, such as
> the presence of a path with high latency. The round-robin
> policy would use that path equally with faster paths, which
> results in sub-optimal performance.

Do you have performance numbers for such a case?

> The queue-depth policy instead sends I/O requests down the path
> with the least amount of requests in its request queue. Paths
> with lower latency will clear requests more quickly and have less
> requests in their queues compared to "bad" paths. The aim is to
> use those paths the most to bring down overall latency.
>
> This implementation adds an atomic variable to the nvme_ctrl
> struct to represent the queue depth. It is updated each time a
> request specific to that controller starts or ends.
>
> [edm: patch developed by Thomas Song @ Pure Storage, fixed whitespace
>        and compilation warnings, updated MODULE_PARM description, and
>        fixed potential issue with ->current_path[] being used]
>
> Co-developed-by: Thomas Song <tsong at purestorage.com>
> Signed-off-by: Ewan D. Milne <emilne at redhat.com>
> ---
>   
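For reference, my reading of the mechanism described above, as a hedged userspace sketch (the struct and function names here are illustrative, not the ones in the actual patch): each path keeps an atomic count of outstanding requests, incremented on submission and decremented on completion, and the selector picks the usable path with the lowest count.

```c
#include <stdatomic.h>
#include <stddef.h>

struct path {
	atomic_int nr_active;   /* outstanding requests on this path */
	int usable;             /* non-zero if the path is live */
};

/* Bump the counter when a request is dispatched to a path. */
static void path_req_start(struct path *p)
{
	atomic_fetch_add(&p->nr_active, 1);
}

/* Drop the counter when the request completes. */
static void path_req_end(struct path *p)
{
	atomic_fetch_sub(&p->nr_active, 1);
}

/* Pick the usable path with the fewest outstanding requests. */
static struct path *select_queue_depth(struct path *paths, size_t n)
{
	struct path *best = NULL;
	int best_depth = 0;

	for (size_t i = 0; i < n; i++) {
		int depth;

		if (!paths[i].usable)
			continue;
		depth = atomic_load(&paths[i].nr_active);
		if (!best || depth < best_depth) {
			best = &paths[i];
			best_depth = depth;
		}
	}
	return best;
}
```

A slow path accumulates requests faster than it completes them, so its counter stays high and the selector naturally steers new I/O toward faster paths.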

Any performance comparison that shows the difference?

-ck
