[PATCH 1/2] qspinlock: Ensure writes are pushed out of core write buffer
Peter Zijlstra
peterz at infradead.org
Wed Jan 27 17:43:52 EST 2021
On Wed, Jan 27, 2021 at 09:01:08PM +0100, Alexander A Sverdlin wrote:
> From: Alexander Sverdlin <alexander.sverdlin at nokia.com>
>
> Ensure writes are pushed out of the core write buffer to prevent waiting
> code on other cores from spinning longer than necessary.
Our smp_wmb() as defined does not have that property. You're relying on
some arch-specific details which do not belong in common code.
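
What smp_wmb() does give you is store ordering against a paired read
barrier on the other side, nothing about how fast either store
propagates. Roughly the MP pattern from tools/memory-model/ (sketch
only, names made up here); herd7 says the exists clause can never be
satisfied, and that ordering is the whole of the guarantee:

C mp-wmb-rmb-sketch

(*
 * Sketch: once P1 observes the store to y, it must also observe the
 * store to x.  Nothing here, or anywhere in the LKMM, bounds *when*
 * P1 observes either store.
 *)

{}

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_wmb();
	WRITE_ONCE(*y, 1);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*y);
	smp_rmb();
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)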
> diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
> index 5e10153..10e497a 100644
> --- a/kernel/locking/mcs_spinlock.h
> +++ b/kernel/locking/mcs_spinlock.h
> @@ -89,6 +89,11 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> return;
> }
> WRITE_ONCE(prev->next, node);
> + /*
> + * This is necessary to make sure that the corresponding "while" in the
> + * mcs_spin_unlock() doesn't loop forever
> + */
This comment is utterly inadequate, since it does not describe an
explicit ordering between two (or more) stores.
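
If there were an actual ordering requirement here, the comment would
have to name the two stores being ordered and the read-side barrier it
pairs with, something like the below (illustrative only; the second
store and the reader are hypothetical placeholders):

	WRITE_ONCE(prev->next, node);
	/*
	 * Order the store to prev->next before the later store to <X>;
	 * pairs with the smp_rmb() between the loads of <X> and
	 * prev->next in <the reader>.
	 */
	smp_wmb();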
> + smp_wmb();
>
> /* Wait until the lock holder passes the lock down. */
> arch_mcs_spin_lock_contended(&node->locked);
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index cbff6ba..577fe01 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -469,6 +469,12 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>
> /* Link @node into the waitqueue. */
> WRITE_ONCE(prev->next, node);
> + /*
> + * This is necessary to make sure that the corresponding
> + * smp_cond_load_relaxed() below (running on another core)
> + * doesn't spin forever.
> + */
> + smp_wmb();
That's insane; cache coherency should not allow that to happen in the
first place. Our smp_wmb() cannot help with that.
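
The waiter on the other side is a plain coherent load loop;
smp_cond_load_relaxed() on a generic arch boils down to roughly the
below (hand-expanded sketch of include/asm-generic/barrier.h, written
in the previous owner's frame, where 'node' is what this CPU calls
'prev'). The only thing WRITE_ONCE(prev->next, node) needs in order to
be observed is the store eventually propagating through cache
coherence, which no write barrier speeds up:

	struct mcs_spinlock *next;

	for (;;) {
		next = READ_ONCE(node->next);	/* plain coherent load */
		if (next)
			break;
		cpu_relax();
	}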
> pv_wait_node(node, prev);
> arch_mcs_spin_lock_contended(&node->locked);