[PATCH 10/10] locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()

Will Deacon <will.deacon@arm.com>
Fri Apr 6 04:34:36 PDT 2018


On Thu, Apr 05, 2018 at 07:28:08PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 05, 2018 at 05:59:07PM +0100, Will Deacon wrote:
> > @@ -340,12 +341,17 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >  		goto release;
> >  
> >  	/*
> > +	 * Ensure that the initialisation of @node is complete before we
> > +	 * publish the updated tail and potentially link @node into the
> > +	 * waitqueue.
> > +	 */
> > +	smp_wmb();
> 
> Maybe an explicit note to where the matching barrier lives..

Oh man, that's not a simple thing to write: there isn't a matching barrier!

Instead, we rely on dependency ordering for two cases (sketched below):

  * We access a node by decoding the tail we get back from the xchg

- or -

  * We access a node by following our own ->next pointer
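
To spell that out a bit, here's a rough sketch of both cases. It paraphrases
the slowpath rather than quoting it verbatim; the helper names (xchg_tail(),
decode_tail(), arch_mcs_spin_unlock_contended()) are the ones used in
kernel/locking/qspinlock.c, but the surrounding structure is simplified:

  /* A new waiter (the hunk above, roughly): */
  node->locked = 0;
  node->next = NULL;

  smp_wmb();                              /* order the init before... */

  old = xchg_tail(lock, tail);            /* ...publishing the new tail */
  if (old & _Q_TAIL_MASK) {
          prev = decode_tail(old);
          WRITE_ONCE(prev->next, node);   /* ...and linking into the queue */
  }

  /*
   * Case 1: the *next* waiter runs the same code; the tail it gets back
   * from xchg_tail() decodes to our @node, and the address dependency on
   * that value means it only dereferences @node after the init above.
   *
   * Case 2: the *previous* waiter eventually follows its own ->next
   * pointer to find our @node:
   */
  next = READ_ONCE(node->next);           /* address dependency: ->next value -> @next */
  if (next)
          arch_mcs_spin_unlock_contended(&next->locked);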

I could say something like:

  "Pairs with dependency ordering from both xchg_tail and explicit
   dereferences of node->next"

but it's a bit cryptic :(
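
FWIW, in context the comment would end up looking something like this
(exact wording still to be decided):

  /*
   * Ensure that the initialisation of @node is complete before we
   * publish the updated tail and potentially link @node into the
   * waitqueue.
   *
   * Pairs with dependency ordering from both xchg_tail() and
   * explicit dereferences of node->next.
   */
  smp_wmb();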

Will


