[PATCH 02/10] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath

Peter Zijlstra peterz at infradead.org
Thu Apr 5 10:07:06 PDT 2018


On Thu, Apr 05, 2018 at 05:58:59PM +0100, Will Deacon wrote:
> The qspinlock locking slowpath utilises a "pending" bit as a simple form
> of an embedded test-and-set lock that can avoid the overhead of explicit
> queuing in cases where the lock is held but uncontended. This bit is
> managed using a cmpxchg loop which tries to transition the uncontended
> lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).
> 
> Unfortunately, the cmpxchg loop is unbounded and lockers can be starved
> indefinitely if the lock word is seen to oscillate between unlocked
> (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are
> able to take the lock in the cmpxchg loop without queuing and pass it
> around amongst themselves.
> 
> This patch fixes the problem by unconditionally setting _Q_PENDING_VAL
> using atomic_fetch_or, 

Of course, LL/SC or cmpxchg implementations of fetch_or do not in fact
get anything from this ;-)
