[PATCH v2 1/3] arm64: spinlock: order spin_{is_locked,unlock_wait} against local locks

Will Deacon <will.deacon@arm.com>
Wed Jun 8 09:25:37 PDT 2016


spin_is_locked has grown two very different use-cases:

(1) [The sane case] API functions may require a certain lock to be held
    by the caller and can therefore use spin_is_locked as part of an
    assert statement in order to verify that the lock is indeed held.
    For example, usage of assert_spin_locked.

(2) [The insane case] There are two locks, where a CPU takes one of the
    locks and then checks whether or not the other one is held before
    accessing some shared state. For example, the "optimized locking" in
    ipc/sem.c.
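
In the former case, the usage is typically a simple debug assertion,
along the following lines (a minimal sketch; struct foo, foo_update()
and f->lock are made up for illustration):

  static void foo_update(struct foo *f)
  {
  	/* Callers must hold f->lock. */
  	assert_spin_locked(&f->lock);
  	f->counter++;
  }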

In the latter case, the sequence looks like:

  spin_lock(&sem->lock);
  if (!spin_is_locked(&sma->sem_perm.lock))
    /* Access shared state */

and requires that the spin_is_locked check be ordered after taking
sem->lock. Unfortunately, since our spinlocks are implemented using a
LDAXR/STXR sequence, the read of &sma->sem_perm.lock can be speculated
before the STXR and consequently return a stale value.
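
To see why, here is a rough sketch of what the sequence above boils
down to (the ticket arithmetic, register allocation and the wait for
lock ownership are simplified for illustration):

  // spin_lock(&sem->lock): LDAXR/STXR ticket sequence
  1: ldaxr w0, [x1]	// x1 = &sem->lock; load-acquire of the lock word
     add   w0, w0, w2	// w2 = 1 << TICKET_SHIFT; take the next ticket
     stxr  w3, w0, [x1]	// store-exclusive: takes the lock, but has no
     cbnz  w3, 1b	// ordering semantics of its own
  // spin_is_locked(&sma->sem_perm.lock):
     ldr   w4, [x5]	// x5 = &sma->sem_perm.lock; this load can be
			// speculated before the STXR and so observe a
			// stale, unlocked value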

Whilst this hasn't been seen to cause issues in practice, PowerPC fixed
the same issue in 51d7d5205d33 ("powerpc: Add smp_mb() to
arch_spin_is_locked()"). We did something similar for spin_unlock_wait
in d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against
concurrent lockers"), but that doesn't take care of ordering against
local acquisition of a different lock.

This patch adds an smp_mb() to the start of our arch_spin_is_locked and
arch_spin_unlock_wait routines to ensure that the lock value is always
loaded after any other locks have been taken by the current CPU.
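
With the barrier in place, the ipc/sem.c sequence above effectively
executes as follows (a sketch; on arm64, smp_mb() expands to DMB ISH):

  spin_lock(&sem->lock);			/* LDAXR ... STXR */
  if (!spin_is_locked(&sma->sem_perm.lock))	/* DMB ISH, then load */
    /* Access shared state */

The DMB ISH prevents the load of sem_perm.lock from being reordered
before the STXR that took sem->lock, so the check can no longer observe
a stale value.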

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/spinlock.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index fc9682bfe002..aac64d55cb22 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -31,6 +31,12 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 	unsigned int tmp;
 	arch_spinlock_t lockval;
 
+	/*
+	 * Ensure prior spin_lock operations to other locks have completed
+	 * on this CPU before we test whether "lock" is locked.
+	 */
+	smp_mb();
+
 	asm volatile(
 "	sevl\n"
 "1:	wfe\n"
@@ -148,6 +154,7 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
+	smp_mb(); /* See arch_spin_unlock_wait */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
 
-- 
2.1.4