dm-multipath low performance with blk-mq

Mike Snitzer snitzer at redhat.com
Wed Jan 27 10:16:56 PST 2016


On Wed, Jan 27 2016 at 12:51pm -0500,
Jens Axboe <axboe at kernel.dk> wrote:

> On 01/27/2016 10:48 AM, Mike Snitzer wrote:
> >
> >BTW, I _cannot_ get null_blk to come even close to your reported 1500K+
> >IOPs on 2 "fast" systems I have access to.  Which arguments are you
> >loading the null_blk module with?
> >
> >I've been using:
> >modprobe null_blk gb=4 bs=4096 nr_devices=1 queue_mode=2 submit_queues=12
> >
> >One of my systems is a 12 core, single socket, single NUMA node box
> >with 12G of memory; on it I can only get ~500K read IOPs and ~85K
> >write IOPs.
> >
> >On another, much larger system with 72 cores, 4 NUMA nodes, and 128G
> >of memory, I can only get ~310K read IOPs and ~175K write IOPs.
> 
> Look at the completion method (irqmode) and completion time
> (completion_nsec).

OK, I found that queue_mode=0 (bio-based) is _much_ faster than blk-mq
(queue_mode=2, the default).  That improves things to ~950K read IOPs
and ~675K write IOPs (on the single NUMA node system).
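
For reference, changing queue_mode means reloading the module; the
bio-based runs were along these lines (other parameters as in my
earlier invocation):

modprobe -r null_blk
modprobe null_blk gb=4 bs=4096 nr_devices=1 queue_mode=0 submit_queues=12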

The default for irqmode is 1 (softirq).  irqmode=2 (timer) yields poor
results, and irqmode=0 (none) seems slightly slower than 1.

And with completion_nsec=1 I can bump that up to ~990K read IOPs.
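
FWIW, the active parameter values can be double-checked under
/sys/module/null_blk/parameters/ after loading, e.g.:

cat /sys/module/null_blk/parameters/irqmode
cat /sys/module/null_blk/parameters/completion_nsec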

So far, the best configuration for IOPs on this system is:
modprobe null_blk gb=4 bs=4096 nr_devices=1 queue_mode=0 irqmode=1 completion_nsec=1 submit_queues=4
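
For anyone wanting to reproduce this kind of measurement, an fio job
against the null_blk device along these lines should be a reasonable
starting point (iodepth/numjobs here are illustrative, tune per
system):

fio --name=randread --filename=/dev/nullb0 --ioengine=libaio \
    --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=30 --time_based --group_reporting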


