[PATCH 03/10] locking/qspinlock: Kill cmpxchg loop when claiming lock from head of queue
Peter Zijlstra
peterz at infradead.org
Thu Apr 5 10:19:12 PDT 2018
On Thu, Apr 05, 2018 at 05:59:00PM +0100, Will Deacon wrote:
> +
> +	/* In the PV case we might already have _Q_LOCKED_VAL set */
> +	if ((val & _Q_TAIL_MASK) == tail) {
>  		/*
>  		 * The smp_cond_load_acquire() call above has provided the
> +		 * necessary acquire semantics required for locking.
>  		 */
>  		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
>  		if (old == val)
> +			goto release; /* No contention */
>  	}
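
For anyone wondering about the _relaxed: the acquire in
smp_cond_load_acquire() pairs with the previous lock holder's release,
so the cmpxchg that actually claims the lock needs no ordering of its
own. Roughly, in userspace C11 terms (illustrative names only, not the
kernel API; the wait condition is simplified and the contended fallback
is omitted):

#include <stdatomic.h>

/*
 * Sketch of the claim-from-head pattern above; not the kernel code.
 * The spin loop observes the lock word with acquire ordering, so the
 * compare-and-swap that claims the lock can be fully relaxed.
 */
static void claim_lock_from_head(atomic_int *lockword, int tail, int locked)
{
	int val;

	/* stand-in for smp_cond_load_acquire() */
	while ((val = atomic_load_explicit(lockword,
					   memory_order_acquire)) != tail)
		;

	/* acquire already provided above; relaxed is sufficient */
	atomic_compare_exchange_strong_explicit(lockword, &val, locked,
						memory_order_relaxed,
						memory_order_relaxed);
}

With that in mind, the cmpxchg/compare pair can become a single
try_cmpxchg:
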
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -464,8 +464,7 @@ void queued_spin_lock_slowpath(struct qs
 		 * The smp_cond_load_acquire() call above has provided the
 		 * necessary acquire semantics required for locking.
 		 */
-		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
-		if (old == val)
+		if (atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
 			goto release; /* No contention */
 	}
Does that also work for you? It would generate slightly better code for
x86 (not that it would matter much on this path).
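
FWIW, the win comes from try_cmpxchg() following the C11
compare-exchange convention: on failure the observed value is written
back through the 'old' pointer. x86 CMPXCHG already leaves the observed
value in EAX and sets ZF, so the compiler can branch on ZF directly
instead of emitting a separate CMP against a reloaded value. A minimal
userspace illustration of the calling convention (hypothetical helper,
not the kernel's implementation):

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Hypothetical userspace equivalent of atomic_try_cmpxchg_relaxed():
 * returns true on success; on failure the value actually observed is
 * written back through *old -- which is exactly what x86 CMPXCHG
 * leaves in EAX anyway, so no extra compare or reload is needed.
 */
static bool try_cmpxchg_relaxed(atomic_int *v, int *old, int new)
{
	return atomic_compare_exchange_strong_explicit(v, old, new,
						       memory_order_relaxed,
						       memory_order_relaxed);
}

Callers then write `if (try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))'
and, should the compare fail, get the observed value back in val for free.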