[PATCH 1/3] blk-mq: add new API of blk_mq_hctx_set_fq_lock_class
Ming Lei
ming.lei at redhat.com
Mon Nov 16 20:16:10 EST 2020
On Tue, Nov 17, 2020 at 09:05:04AM +0800, Ming Lei wrote:
> On Mon, Nov 16, 2020 at 06:26:58PM +0100, Christoph Hellwig wrote:
> > On Thu, Nov 12, 2020 at 03:55:24PM +0800, Ming Lei wrote:
> > > flush_end_io() may be called recursively from some drivers, such as
> > > nvme-loop, so lockdep may complain about 'possible recursive locking'.
> > > Commit b3c6a5997541 ("block: Fix a lockdep complaint triggered by
> > > request queue flushing") tried to address this issue by assigning a
> > > dynamically allocated per-flush-queue lock class. This solution adds a
> > > synchronize_rcu() to each hctx's release handler, and causes a
> > > horrible SCSI MQ probe delay (more than half an hour on megaraid_sas).
> > >
> > > Add a new API, blk_mq_hctx_set_fq_lock_class(), for these drivers, so
> > > we just need to use a driver-specific lock class to avoid the lockdep
> > > warning of 'possible recursive locking'.
> >
> > I'd turn this into an inline function to avoid the (albeit very
> > minimal) cost when LOCKDEP is not enabled.
>
> blk_mq_hctx_set_fq_lock_class() is just a one-shot thing, so do you
> really care about the cost?
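
To make the 'one-shot' point concrete, the driver-side usage would be
roughly the following (a sketch only, not the actual nvme-loop patch; the
key name and the .init_hctx call site are assumptions):

	/* one static lock class key per driver is enough for lockdep */
	static struct lock_class_key loop_hctx_fq_lock_key;

	static int nvme_loop_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
			unsigned int hctx_idx)
	{
		/* ... existing per-hctx setup ... */

		/* called once per hctx, only at queue setup time */
		blk_mq_hctx_set_fq_lock_class(hctx, &loop_hctx_fq_lock_key);
		return 0;
	}
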
Forgot to mention: 'blk_flush_queue' is a private structure inside the block
layer, so we can't define the helper as inline.
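
Roughly, the helper has to live next to the flush queue code (e.g. in
block/blk-flush.c) because it dereferences hctx->fq, and 'struct
blk_flush_queue' is only visible via the block layer's private header.
Something along these lines (a sketch, not the exact patch):

	void blk_mq_hctx_set_fq_lock_class(struct blk_mq_hw_ctx *hctx,
			struct lock_class_key *key)
	{
		/* reassign the lock class of this hctx's flush queue lock */
		lockdep_set_class(&hctx->fq->mq_flush_lock, key);
	}
	EXPORT_SYMBOL_GPL(blk_mq_hctx_set_fq_lock_class);
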
thanks,
Ming