[PATCH 1/2] qspinlock: Ensure writes are pushed out of core write buffer

Alexander Sverdlin alexander.sverdlin at nokia.com
Thu Jan 28 02:42:03 EST 2021


Hi!

On 27/01/2021 23:43, Peter Zijlstra wrote:
> On Wed, Jan 27, 2021 at 09:01:08PM +0100, Alexander A Sverdlin wrote:
>> From: Alexander Sverdlin <alexander.sverdlin at nokia.com>
>>
>> Ensure writes are pushed out of the core write buffer to prevent waiting
>> code on other cores from spinning longer than necessary.
> Our smp_wmb() as defined does not have that property. You're relying on
> some arch specific details which do not belong in common code.

Yes, my intention was SYNCW on Octeon, which happens to be what smp_wmb()
expands to there. Do you think the core write buffer is an Octeon-only
feature and that no other architecture will have one?

Should I re-implement arch_mcs_spin_lock_contended() for Octeon only,
as has been done for ARM?
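
If so, a rough sketch of what I have in mind (a hypothetical
arch/mips/include/asm/mcs_spinlock.h for Octeon, mirroring the generic
definition in kernel/locking/mcs_spinlock.h; untested and purely
illustrative):

#ifndef _ASM_MCS_SPINLOCK_H
#define _ASM_MCS_SPINLOCK_H

#include <asm/barrier.h>

/*
 * On Octeon smp_wmb() is SYNCW, which pushes the caller's store to
 * prev->next out of the core write buffer, so the previous waiter sees
 * it without the extra buffering delay.
 */
#define arch_mcs_spin_lock_contended(l)					\
do {									\
	smp_wmb();							\
	smp_cond_load_acquire(l, VAL);					\
} while (0)

#endif /* _ASM_MCS_SPINLOCK_H */

That would keep the SYNCW out of the common code and confine it to the one
architecture (so far) known to need it.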

>> diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
>> index 5e10153..10e497a 100644
>> --- a/kernel/locking/mcs_spinlock.h
>> +++ b/kernel/locking/mcs_spinlock.h
>> @@ -89,6 +89,11 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>>  		return;
>>  	}
>>  	WRITE_ONCE(prev->next, node);
>> +	/*
>> +	 * This is necessary to make sure that the corresponding "while" in the
>> +	 * mcs_spin_unlock() doesn't loop forever
>> +	 */
> This comment is utterly inadequate, since it does not describe an
> explicit ordering between two (or more) stores.
> 
>> +	smp_wmb();
>>  
>>  	/* Wait until the lock holder passes the lock down. */
>>  	arch_mcs_spin_lock_contended(&node->locked);
>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
>> index cbff6ba..577fe01 100644
>> --- a/kernel/locking/qspinlock.c
>> +++ b/kernel/locking/qspinlock.c
>> @@ -469,6 +469,12 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>>  
>>  		/* Link @node into the waitqueue. */
>>  		WRITE_ONCE(prev->next, node);
>> +		/*
>> +		 * This is necessary to make sure that the corresponding
>> +		 * smp_cond_load_relaxed() below (running on another core)
>> +		 * doesn't spin forever.
>> +		 */
>> +		smp_wmb();
> That's insane, cache coherency should not allow that to happen in the
> first place. Our smp_wmb() cannot help with that.
> 
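
Just to spell out which accesses pair up here (paraphrasing the relevant
pieces of queued_spin_lock_slowpath() and mcs_spinlock.h with illustrative
names, not the actual kernel code):

#include <linux/compiler.h>	/* WRITE_ONCE() */
#include <asm/barrier.h>	/* smp_wmb(), smp_cond_load_relaxed() */

struct mcs_spinlock {		/* as in kernel/locking/mcs_spinlock.h */
	struct mcs_spinlock *next;
	int locked;
	int count;
};

/* CPU A (new waiter): links itself behind @prev. */
static void waiter_link(struct mcs_spinlock *prev, struct mcs_spinlock *node)
{
	WRITE_ONCE(prev->next, node);	/* may linger in A's write buffer */
	smp_wmb();			/* orders stores; only on Octeon
					 * (SYNCW) does it also drain the
					 * write buffer */
}

/* CPU B (previous waiter, now lock holder): waits to see its successor. */
static struct mcs_spinlock *waiter_find_successor(struct mcs_spinlock *node)
{
	/* Coherence guarantees A's store becomes visible eventually. */
	return smp_cond_load_relaxed(&node->next, (VAL));
}

I agree the store cannot stay invisible forever; my concern is only how long
the write buffer can delay it, hence the "longer than necessary" in the
commit message.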

-- 
Best regards,
Alexander Sverdlin.


