[RFC PATCH] dm: fix excessive dm-mq context switching
Hannes Reinecke
hare at suse.de
Mon Feb 8 23:50:33 PST 2016
On 02/07/2016 06:20 PM, Mike Snitzer wrote:
> On Sun, Feb 07 2016 at 11:54am -0500,
> Sagi Grimberg <sagig at dev.mellanox.co.il> wrote:
>
>>
>>>> If so, can you check with e.g.
>>>> perf record -ags -e LLC-load-misses sleep 10 && perf report whether this
>>>> workload triggers perhaps lock contention ? What you need to look for in
>>>> the perf output is whether any functions occupy more than 10% CPU time.
>>>
>>> I will, thanks for the tip!
>>
>> The perf report is very similar to the one that started this effort..
>>
>> I'm afraid we'll need to resolve the per-target m->lock in order
>> to scale with NUMA...
>
> Could be. Just for testing, you can try the 2 topmost commits I've put
> here (once applied both __multipath_map and multipath_busy won't have
> _any_ locking.. again, very much test-only):
>
> http://git.kernel.org/cgit/linux/kernel/git/snitzer/linux.git/log/?h=devel2
>
So, I gave those patches a spin.
Sad to say, they do _not_ resolve the issue fully.
My testbed (2 paths per LUN, 40 CPUs, 4 cores) yields 505k IOPS with
those patches.
Using a single path (without those patches, but still running
multipath on top of that path), the same testbed yields 550k IOPS,
which very much smells like lock contention ...
We do get a slight improvement, though; without those patches I
could only get about 350k IOPS. But still, I would somehow expect 2
paths to be faster than just one ...
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare at suse.de +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)