[bug report] WARNING: possible circular locking at: rdma_destroy_id+0x17/0x20 [rdma_cm] triggered by blktests nvmeof-mp/002

Jason Gunthorpe jgg at ziepe.ca
Tue May 31 05:35:44 PDT 2022


On Sat, May 28, 2022 at 09:00:16PM +0200, Bart Van Assche wrote:
> On 5/27/22 14:52, Jason Gunthorpe wrote:
> > On Wed, May 25, 2022 at 08:50:52PM +0200, Bart Van Assche wrote:
> > > On 5/25/22 13:01, Sagi Grimberg wrote:
> > > > iirc this was reported before; based on my analysis, lockdep is
> > > > giving a false alarm here. The id_priv->handler_mutex cannot be
> > > > the same for the cm_id that is handling the connect and the cm_id
> > > > that rdma_destroy_id() is called on, because rdma_destroy_id() is
> > > > always called on an already disconnected cm_id, so the deadlock
> > > > lockdep is complaining about cannot happen.
> > > > 
> > > > I'm not sure how to settle this.
> > > 
> > > If the above is correct, using lockdep_register_key() for
> > > id_priv->handler_mutex instead of a static key should make the lockdep false
> > > positive disappear.
> > 
> > That only works if you can detect actual different lock classes during
> > lock creation. It doesn't seem applicable in this case.
> 
> Why doesn't it seem applicable in this case? The default behavior of
> mutex_init() and related initialization functions is to create one lock
> class per synchronization object initialization caller.
> lockdep_register_key() can be used to create one lock class per
> synchronization object instance. I introduced lockdep_register_key() myself
> a few years ago.

I don't think lockdep_register_key() should be used to create one key
per object instance, which is what would be required here. The
overhead would be very high.
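For reference, the per-instance key pattern under discussion looks
roughly like the user-space sketch below. The struct and helper names
are invented, and the stubbed lockdep_register_key() /
lockdep_unregister_key() stand in for the real kernel primitives; in
the kernel, lockdep_set_class() would additionally bind the key to the
mutex so lockdep tracks one class per instance instead of one static
class shared by every cm_id:

```c
#include <pthread.h>
#include <stdlib.h>

/* User-space stand-ins for the kernel lockdep primitives, so the
 * pattern can be shown self-contained. */
struct lock_class_key { int registered; };

static void lockdep_register_key(struct lock_class_key *key)
{
	key->registered = 1;
}

static void lockdep_unregister_key(struct lock_class_key *key)
{
	key->registered = 0;
}

/* Loosely modeled on rdma_id_private: each instance carries its own
 * key, giving lockdep a distinct class per handler_mutex. */
struct id_priv {
	pthread_mutex_t handler_mutex;
	struct lock_class_key key;
};

static struct id_priv *id_priv_create(void)
{
	struct id_priv *id = calloc(1, sizeof(*id));

	lockdep_register_key(&id->key);        /* one class per instance */
	pthread_mutex_init(&id->handler_mutex, NULL);
	/* kernel: lockdep_set_class(&id->handler_mutex, &id->key); */
	return id;
}

static void id_priv_destroy(struct id_priv *id)
{
	pthread_mutex_destroy(&id->handler_mutex);
	lockdep_unregister_key(&id->key);
	free(id);
}
```

The cost Jason refers to is exactly this: one registered key object
per cm_id, tracked by lockdep for the lifetime of each instance.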

> My opinion is that holding *any* lock around the invocation of a callback
> function is an antipattern, in other words, something that never should be
> done. 

Then you invariably end up with an API that is full of races, because
we do need to run the callbacks synchronously with the FSM. Many
syzkaller bugs were fixed by adding this serialization.
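A minimal sketch of the serialization being described, with invented
names and pthreads standing in for kernel mutexes: the FSM state and
the ULP callback are guarded by the same handler_mutex, so the state
the handler observes cannot change underneath it, and an event racing
with destroy is simply dropped:

```c
#include <pthread.h>

enum cm_state { CM_IDLE, CM_CONNECTING, CM_CONNECTED, CM_DESTROYING };

struct cm_id_sketch {
	pthread_mutex_t handler_mutex;
	enum cm_state state;
	int last_event;        /* written only under handler_mutex */
	int (*event_handler)(struct cm_id_sketch *id, int event);
};

/* The FSM check/transition and the ULP callback run under one mutex:
 * this is the "callbacks synchronous with the FSM" property. */
static int deliver_event(struct cm_id_sketch *id, int event)
{
	int ret = 0;

	pthread_mutex_lock(&id->handler_mutex);
	if (id->state != CM_DESTROYING)
		ret = id->event_handler(id, event);
	pthread_mutex_unlock(&id->handler_mutex);
	return ret;
}

/* A ULP handler: free to read and advance the FSM with no extra
 * locking, because delivery is already serialized. */
static int example_handler(struct cm_id_sketch *id, int event)
{
	id->last_event = event;
	if (event == 1)        /* "established" in this sketch */
		id->state = CM_CONNECTED;
	return 0;
}
```

This is also why the lock shows up in lockdep traces: the callback
runs inside the critical section by design, not by accident.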

> Has it been considered to rework the RDMA/CM such that no locks are held
> around the invocation of callback functions like the event_handler
> callback?

IMHO it is too difficult, maybe impossible.

> There are other mechanisms to report events from one software layer
> (RDMA/CM) to a higher software layer (ULP), e.g. a linked list with event
> information. The RDMA/CM could queue events onto that list and the ULP can
> dequeue events from that list.

Then it is not synchronous; the point of these callbacks is to be
synchronous. If a ULP wants, and can tolerate, decoupled operation
then it can implement an event queue itself, but we can't generally
assume that all ULPs are safe to be asynchronous for all events.

This also doesn't actually solve anything, because we still have races
with destroying the ID while the event queue is referring to the
cm_id, or while the event queue consumer is processing it. Solving
that still requires locks, even if they may be weaker rwlocks or
refcounting.
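A sketch of the refcounting point, with invented names: even with a
decoupled event queue, each queued event has to pin the cm_id with a
reference, and teardown can only complete when the last reference
drops. The `freed` flag here is a visible marker for the sketch; real
kernel code would kfree or complete() instead:

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical per-id refcount: creator holds one reference, and
 * every queued event takes another while it sits on the queue. */
struct pinned_id {
	atomic_int refcount;
	int freed;             /* marker for the sketch only */
};

static struct pinned_id *pinned_id_alloc(void)
{
	struct pinned_id *id = calloc(1, sizeof(*id));

	atomic_init(&id->refcount, 1);     /* creator's reference */
	return id;
}

static void pinned_id_get(struct pinned_id *id)
{
	atomic_fetch_add(&id->refcount, 1);
}

static void pinned_id_put(struct pinned_id *id)
{
	/* fetch_sub returns the previous value: dropping from 1 means
	 * this was the last reference, and only then may teardown run. */
	if (atomic_fetch_sub(&id->refcount, 1) == 1)
		id->freed = 1;
}
```

So the queue merely moves the synchronization around: the get/put
pair is itself the "refcounting lock" the paragraph above mentions.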

> [1] Ousterhout, John. "Why threads are a bad idea (for most purposes)." In
> Presentation given at the 1996 Usenix Annual Technical Conference, vol. 5.
> 1996.

Indeed, but we have threads here and we can't wish them away.

Jason


