[PATCH 2/3] nvme: multipath: only update ctrl->nr_active when using queue-depth iopolicy

Ewan Milne emilne at redhat.com
Wed Nov 8 10:38:10 PST 2023


I did attempt an implementation with percpu counters, but I did not
see enough of a benefit.  I also tried updating the counters for only
a fraction of the requests; what I observed is that it does not work
nearly as well as counting them all, unfortunately.
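For reference, a batched per-cpu variant along the lines Christoph
suggests below could look roughly like the sketch here.  The qd_sketch
names and the batch size of 32 are invented for illustration; this is
not the code that was benchmarked:

#include <linux/gfp.h>
#include <linux/percpu_counter.h>

/*
 * Illustrative sketch only.  The struct, helper names, and batch
 * size are made up for this example.
 */
struct qd_sketch {
	struct percpu_counter nr_active;	/* batched per-CPU counter */
};

static int qd_sketch_init(struct qd_sketch *s)
{
	return percpu_counter_init(&s->nr_active, 0, GFP_KERNEL);
}

static void qd_sketch_start_io(struct qd_sketch *s)
{
	/*
	 * Folds into the shared count only when the per-CPU delta
	 * exceeds the batch, so most updates stay CPU-local.
	 */
	percpu_counter_add_batch(&s->nr_active, 1, 32);
}

static void qd_sketch_end_io(struct qd_sketch *s)
{
	percpu_counter_add_batch(&s->nr_active, -1, 32);
}

static s64 qd_sketch_depth(struct qd_sketch *s)
{
	/* Approximate read; percpu_counter_sum() would be exact. */
	return percpu_counter_read(&s->nr_active);
}

The trade-off is on the read side: percpu_counter_read() can be stale
by up to batch * nr_cpus, so a path selector using it sees a fuzzier
queue depth than a plain atomic provides.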

-Ewan

On Wed, Nov 8, 2023 at 3:10 AM Christoph Hellwig <hch at infradead.org> wrote:
>
> On Tue, Nov 07, 2023 at 02:53:48PM -0700, Keith Busch wrote:
> > On Tue, Nov 07, 2023 at 04:23:30PM -0500, Ewan D. Milne wrote:
> > > The atomic updates of ctrl->nr_active are unnecessary when using
> > > numa or round-robin iopolicy, so avoid that cost on a per-request basis.
> > > Clear nr_active when changing iopolicy and do not decrement below zero.
> > > (This handles changing the iopolicy while requests are in flight.)
> >
> > Oh, here's restricting it to that policy. Any reason not to fold it into
> > the first one?
>
> It should, and I agree with all the other comments.
>
> But I'm also pretty deeply unhappy with the whole thing.  This is a
> controller-wide atomic taken for every I/O.  How slow are the subsystems
> people want to use it for?  And is a global max active really the
> right measure, or would a per-cpu, or at least a batched per-cpu as
> used by the percpu counters, be a better option?
>
>
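
Concretely, the accounting the patch describes comes down to something
like the following sketch.  The helper names and the NVME_IOPOLICY_QD
enum value are placeholders here, not necessarily the posted code:

#include <linux/atomic.h>
#include <linux/compiler.h>

/*
 * Sketch of the guarded accounting described in the patch.  The
 * iopolicy enum value and helper names are assumptions.
 */
static inline void sketch_inc_active(struct nvme_ctrl *ctrl)
{
	/* Skip the atomic entirely for numa/round-robin. */
	if (READ_ONCE(ctrl->subsys->iopolicy) == NVME_IOPOLICY_QD)
		atomic_inc(&ctrl->nr_active);
}

static inline void sketch_dec_active(struct nvme_ctrl *ctrl)
{
	if (READ_ONCE(ctrl->subsys->iopolicy) == NVME_IOPOLICY_QD)
		/*
		 * Never drop below zero: a request may have been
		 * submitted before the policy switched to queue-depth
		 * (nr_active is cleared on the switch), so its
		 * completion must not underflow the counter.
		 */
		atomic_dec_if_positive(&ctrl->nr_active);
}

atomic_dec_if_positive() is what lets in-flight requests from before a
policy change complete without pushing the freshly cleared counter
negative.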
