[RFC PATCH] dm: fix excessive dm-mq context switching

Mike Snitzer snitzer at redhat.com
Tue Feb 9 06:55:47 PST 2016


On Tue, Feb 09 2016 at  2:50am -0500,
Hannes Reinecke <hare at suse.de> wrote:

> On 02/07/2016 06:20 PM, Mike Snitzer wrote:
> > On Sun, Feb 07 2016 at 11:54am -0500,
> > Sagi Grimberg <sagig at dev.mellanox.co.il> wrote:
> > 
> >>
> >>>> If so, can you check with e.g.
> >>>> perf record -ags -e LLC-load-misses sleep 10 && perf report whether this
> >>>> workload triggers perhaps lock contention ? What you need to look for in
> >>>> the perf output is whether any functions occupy more than 10% CPU time.
> >>>
> >>> I will, thanks for the tip!
> >>
> >> The perf report is very similar to the one that started this effort..
> >>
> >> I'm afraid we'll need to resolve the per-target m->lock in order
> >> to scale with NUMA...
> > 
> > Could be.  Just for testing, you can try the 2 topmost commits I've put
> > here (once applied both __multipath_map and multipath_busy won't have
> > _any_ locking.. again, very much test-only):
> > 
> > http://git.kernel.org/cgit/linux/kernel/git/snitzer/linux.git/log/?h=devel2
> > 
> So, I gave those patches a spin.
> Sad to say, they do _not_ resolve the issue fully.
>
> My testbed (2 paths per LUN, 40 CPUs, 4 cores) yields 505k IOPS with
> those patches.

That isn't a surprise.  We already knew the m->lock spinlock contention
was a problem, and NUMA makes it even worse.
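
To make the contention concrete: dm-mpath keeps a single spinlock per
multipath target, and every I/O on every CPU has to take it in the map
path.  A minimal sketch of the pattern (illustrative only, not the
actual drivers/md/dm-mpath.c code; fields, path selection and error
handling are all elided):

    struct multipath {
            spinlock_t lock;                /* one lock per target */
            struct pgpath *current_pgpath;
            /* ... */
    };

    /*
     * Sketch of the per-I/O fast path: every CPU issuing I/O to this
     * target serializes on the same m->lock, so on a NUMA machine the
     * lock's cacheline ping-pongs between sockets on every request.
     */
    static int __multipath_map(struct multipath *m)
    {
            unsigned long flags;
            struct pgpath *pgpath;

            spin_lock_irqsave(&m->lock, flags);
            pgpath = m->current_pgpath;     /* selected under the lock */
            spin_unlock_irqrestore(&m->lock, flags);

            return pgpath ? DM_MAPIO_REMAPPED : -EIO;  /* dispatch elided */
    }

The test-only patches in the branch above effectively reduce this to an
unlocked read, something like pgpath = READ_ONCE(m->current_pgpath);
that is not safe against concurrent path switches, but it isolates how
much of the IOPS gap is pure lock contention.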

> Using a single path (without those patches, but still running
> multipath on top of that path) the same testbed yields 550k IOPS,
> which very much smells like lock contention ...
> We do get a slight improvement, though; without those patches I
> could only get about 350k IOPS. But still, I would somehow expect
> two paths to be faster than just one ...

https://www.redhat.com/archives/dm-devel/2016-February/msg00036.html

hint hint...


