[PATCH V8 10/10] csky: Add qspinlock support

Waiman Long longman at redhat.com
Sun Jul 24 19:08:34 PDT 2022


On 7/24/22 08:25, guoren at kernel.org wrote:
> From: Guo Ren <guoren at linux.alibaba.com>
>
> Enable qspinlock per the requirements mentioned in a8ad07e5240c9
> ("asm-generic: qspinlock: Indicate the use of mixed-size atomics").
>
> C-SKY only has "ldex/stex" (LL/SC) for all atomic operations, and
> csky gives a strong forward progress guarantee for "ldex/stex":
> once ldex has grabbed the cache line into L1, it blocks other cores
> from snooping that address for several cycles.
>
> Signed-off-by: Guo Ren <guoren at linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren at kernel.org>
> ---
>   arch/csky/Kconfig               | 16 ++++++++++++++++
>   arch/csky/include/asm/Kbuild    |  2 ++
>   arch/csky/include/asm/cmpxchg.h | 20 ++++++++++++++++++++
>   3 files changed, 38 insertions(+)
>
> diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
> index dfdb436b6078..09f7d1f06bca 100644
> --- a/arch/csky/Kconfig
> +++ b/arch/csky/Kconfig
> @@ -354,6 +354,22 @@ config HAVE_EFFICIENT_UNALIGNED_STRING_OPS
>   	  Say Y here to enable EFFICIENT_UNALIGNED_STRING_OPS. Some CPU models could
>   	  deal with unaligned access by hardware.
>   
> +choice
> +	prompt "C-SKY spinlock type"
> +	default CSKY_TICKET_SPINLOCKS
> +
> +config CSKY_TICKET_SPINLOCKS
> +	bool "Using ticket spinlock"
> +
> +config CSKY_QUEUED_SPINLOCKS
> +	bool "Using queued spinlock"
> +	depends on SMP
> +	select ARCH_USE_QUEUED_SPINLOCKS
> +	help
> +	  Make sure your microarchitecture's LL/SC has a strong forward progress guarantee.
> +	  Otherwise, stay at ticket-lock/combo-lock.

"combo-lock"? It is a cut-and-paste error. Right?

Cheers,
Longman
