dm-multipath low performance with blk-mq
Mike Snitzer
snitzer at redhat.com
Wed Feb 3 11:22:57 PST 2016
On Wed, Feb 03 2016 at 1:24pm -0500,
Mike Snitzer <snitzer at redhat.com> wrote:
> > Here are pictures of 'perf report' for perf data collected using
> > 'perf record -ag -e cs'.
> >
> > Against null_blk:
> > http://people.redhat.com/msnitzer/perf-report-cs-null_blk.png
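
(For reference, the reports were collected along these lines; the run
length is arbitrary and perf was run alongside the fio workload:)

  # sample context switches system-wide, with call graphs, for ~30s
  perf record -ag -e cs -- sleep 30
  perf report
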
>
> if dm-mq nr_hw_queues=1 and null_blk nr_hw_queues=1
> cpu : usr=25.53%, sys=74.40%, ctx=1970, majf=0, minf=474
> if dm-mq nr_hw_queues=1 and null_blk nr_hw_queues=4
> cpu : usr=26.79%, sys=73.15%, ctx=2067, majf=0, minf=479
>
> > Against dm-mpath ontop of the same null_blk:
> > http://people.redhat.com/msnitzer/perf-report-cs-dm_mq.png
>
> if dm-mq nr_hw_queues=1 and null_blk nr_hw_queues=1
> cpu : usr=11.07%, sys=33.90%, ctx=667784, majf=0, minf=466
> if dm-mq nr_hw_queues=1 and null_blk nr_hw_queues=4
> cpu : usr=15.22%, sys=48.44%, ctx=2314901, majf=0, minf=466
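
For anyone wanting to reproduce these configs: null_blk's nr_hw_queues is
its submit_queues module parameter, and dm-mq is enabled with dm_mod's
use_blk_mq parameter.  The single-path multipath table below is only a
sketch (device node, path selector and repeat_count are placeholder
choices):

  # null_blk in blk-mq mode; submit_queues sets nr_hw_queues
  modprobe null_blk queue_mode=2 submit_queues=4
  # request-based DM on top of blk-mq (dm-mq)
  modprobe dm_mod use_blk_mq=Y
  # minimal dm-mpath table: 1 path group, round-robin, one path
  SIZE=$(blockdev --getsz /dev/nullb0)
  echo "0 $SIZE multipath 0 0 1 1 round-robin 0 1 1 /dev/nullb0 1000" | \
      dmsetup create mpath_test
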
I promise, this is my last reply to myself ;)
The above dm-mq results were _without_ using this commit:
"dm: don't blk_mq_run_hw_queues in blk-mq request completion"
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.6&id=cc6ca783e8f0669112c5f4154f51a7cb17b76006
But with that commit applied I'm still seeing the high context switch
counts:
if dm-mq nr_hw_queues=1 and null_blk nr_hw_queues=1
cpu : usr=11.78%, sys=36.11%, ctx=690262, majf=0, minf=470
if dm-mq nr_hw_queues=1 and null_blk nr_hw_queues=4
cpu : usr=15.62%, sys=49.95%, ctx=2425084, majf=0, minf=466
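
(fio's ctx accounting can be cross-checked system-wide with perf stat;
the duration here is arbitrary:)

  # count context switches across all CPUs while the workload runs
  perf stat -e context-switches -a -- sleep 10
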
So running blk_mq_run_hw_queues (asynchronously, to punt to kblockd) on
dm-mq request completion isn't the source of any of the accounted context
switches...
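
One way to double-check that is to watch the workqueue tracepoints while
fio runs; a rough sketch, assuming the async hw queue runs show up as
blk_mq_run_work_fn executions:

  # record workqueue item execution system-wide for 10s
  perf record -e workqueue:workqueue_execute_start -a -- sleep 10
  # count how many were blk-mq hw queue runs punted to kblockd
  perf script | grep -c blk_mq_run_work_fn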