[RFC PATCH] dm: fix excessive dm-mq context switching
Mike Snitzer
snitzer at redhat.com
Mon Feb 8 06:34:00 PST 2016
On Mon, Feb 08 2016 at 7:21am -0500,
Sagi Grimberg <sagig at dev.mellanox.co.il> wrote:
>
> >>The perf report is very similar to the one that started this effort..
> >>
> >>I'm afraid we'll need to resolve the per-target m->lock in order
> >>to scale with NUMA...
> >
> >Could be. Just for testing, you can try the 2 topmost commits I've put
> >here (once applied both __multipath_map and multipath_busy won't have
> >_any_ locking.. again, very much test-only):
> >
> >http://git.kernel.org/cgit/linux/kernel/git/snitzer/linux.git/log/?h=devel2
>
> Hi Mike,
>
> So I still don't see the IOPs scale like I expected. With these two
> patches applied I see ~670K IOPs while the perf output is different
> and does not indicate a clear lock contention.
Right, perf with its default events isn't the right tool to track this down.
But the trace below suggests you aren't running with the first
context-switching fix, which seems odd:
> - 2.07% ksoftirqd/6 [kernel.kallsyms] [k] blk_mq_run_hw_queues
> - blk_mq_run_hw_queues
> - 99.70% rq_completed
> dm_done
> dm_softirq_done
> blk_done_softirq
> + __do_softirq
As you can see here:
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=a5b835282422ec41991c1dbdb88daa4af7d166d2
rq_completed() shouldn't be calling blk_mq_run_hw_queues() with the
latest code.
Please triple check you have the latest code, e.g.:
git diff snitzer/devel2
More information about the Linux-nvme mailing list