[PATCH 2/3] nvme: multipath: only update ctrl->nr_active when using queue-depth iopolicy
John Meneghini
jmeneghi at redhat.com
Wed Nov 8 08:58:38 PST 2023
On 11/8/23 03:09, Christoph Hellwig wrote:
> But I'm also pretty deeply unhappy with the whole thing. This is a controller-wide atomic taken for every I/O. How slow are the
> subsystems people want to use it for?
This would never be useful with PCIe subsystems, which all have latency/response times in the 10s of microseconds range.
This is really only useful with Fibre Channel and TCP attached subsystems. We have several enterprise-class storage arrays here
in our lab at Red Hat, all with multiple 32Gb FC and 100 Gbps TCP connections. Based on our own private performance analysis of
these devices, they have latency/response times in the 10s of milliseconds range... at best... and the performance bottlenecks are
in the storage arrays, not in the fabrics. Even some of the RDMA attached storage arrays are slow. These devices have trouble
scaling up performance, so they scale out instead, adding more and more paths and controllers to different domains in
their subsystems.
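
To make the scale-out point concrete, queue-depth scheduling just means picking the sibling path whose controller currently
has the fewest outstanding commands. A minimal sketch, assuming the ctrl->nr_active counter from this series and the usual
multipath helpers in drivers/nvme/host/multipath.c; the helper name nvme_qd_select_path and the exact loop are illustrative,
not the patch itself:

static struct nvme_ns *nvme_qd_select_path(struct nvme_ns_head *head)
{
	struct nvme_ns *ns, *best = NULL;
	unsigned int min_depth = UINT_MAX;

	/* Walk the sibling paths and prefer the least-loaded controller. */
	list_for_each_entry_rcu(ns, &head->list, siblings) {
		unsigned int depth;

		if (nvme_path_is_disabled(ns))
			continue;

		depth = atomic_read(&ns->ctrl->nr_active);
		if (depth < min_depth) {
			min_depth = depth;
			best = ns;
		}
	}

	return best;
}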
I am not concerned about the addition of one atomic counter with these use cases. In the grand scheme of things, when using an
nvme-of attached storage array, the trade-offs are worth it. Besides, if you've dived into the TCP socket layer, you'll see
there are plenty of atomic counters in the data path already. This is the main use case that needs QD scheduling - nvme/tcp -
where latency can build up in the connection as well as in the storage array.
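
And per the subject line, the point of this particular patch is just to make that cost conditional: skip the atomic entirely
unless the queue-depth iopolicy is selected. A sketch of the idea, assuming the series adds an NVME_IOPOLICY_QD value next to
the existing numa/round-robin iopolicies; the helper names here are made up for illustration:

static inline void nvme_qd_start_request(struct nvme_ns *ns)
{
	/* Only pay for the atomic when the queue-depth iopolicy is in use. */
	if (READ_ONCE(ns->head->subsys->iopolicy) == NVME_IOPOLICY_QD)
		atomic_inc(&ns->ctrl->nr_active);
}

static inline void nvme_qd_end_request(struct nvme_ns *ns)
{
	if (READ_ONCE(ns->head->subsys->iopolicy) == NVME_IOPOLICY_QD)
		atomic_dec(&ns->ctrl->nr_active);
}

One thing a real implementation has to handle is the iopolicy changing between submission and completion: the increment and
decrement must stay paired, e.g. by recording at submit time (in a per-request flag) whether the counter was bumped.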
/John