dm-multipath low performance with blk-mq

Mike Snitzer snitzer at redhat.com
Thu Feb 4 06:09:59 PST 2016


On Thu, Feb 04 2016 at  8:58am -0500,
Hannes Reinecke <hare at suse.de> wrote:

> On 02/04/2016 02:54 PM, Mike Snitzer wrote:
> > On Thu, Feb 04 2016 at  1:54am -0500,
> > Hannes Reinecke <hare at suse.de> wrote:
> > 
> [ .. ]
> >> But anyway, I'll be looking at your patches.
> > 
> > Thanks, sadly none of the patches are going to fix the performance
> > problems but I do think they are a step forward.
> > 
> Hmm. I've got a slew of patches converting dm-mpath to use atomic_t
> and bitops; with that we should be able to move to rcu for path
> lookup and do away with most of the locking.
> Quite raw, though; drop me a mail if you're interested.

Hmm, ok, I just switched m->lock from spinlock_t to rwlock_t, see:
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.6&id=a5226e23a6958ac9b7ade13a983604c43d232c7d

So any patch you have in this area would need rebasing.  I'll gladly
look at what you have (even if it isn't rebased).  So yes, please share.

(It could be that there isn't a big enough win associated with switching
to rwlock_t, and that we could get away without that particular churn.
I'm open to that if you think rwlock_t is pointless given we'll take
the write lock once repeat_count drops to 0.)



More information about the Linux-nvme mailing list