[PATCH V10 07/19] riscv: qspinlock: errata: Introduce ERRATA_THEAD_QSPINLOCK

Guo Ren guoren at kernel.org
Mon Aug 7 19:12:15 PDT 2023


On Mon, Aug 07, 2023 at 01:23:34AM -0400, Stefan O'Rear wrote:
> On Wed, Aug 2, 2023, at 12:46 PM, guoren at kernel.org wrote:
> > From: Guo Ren <guoren at linux.alibaba.com>
> >
> > According to qspinlock requirements, RISC-V gives out a weak LR/SC
> > forward progress guarantee which does not satisfy qspinlock. But
> > many vendors could produce stronger forward guarantee LR/SC to
> > ensure the xchg_tail could be finished in time on any kind of
> > hart. T-HEAD is the vendor which implements strong forward
> > guarantee LR/SC instruction pairs, so enable qspinlock for T-HEAD
> > with errata help.
> >
> > T-HEAD early version of processors has the merge buffer delay
> > problem, so we need ERRATA_WRITEONCE to support qspinlock.
> >
> > Signed-off-by: Guo Ren <guoren at linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren at kernel.org>
> > ---
> >  arch/riscv/Kconfig.errata              | 13 +++++++++++++
> >  arch/riscv/errata/thead/errata.c       | 24 ++++++++++++++++++++++++
> >  arch/riscv/include/asm/errata_list.h   | 20 ++++++++++++++++++++
> >  arch/riscv/include/asm/vendorid_list.h |  3 ++-
> >  arch/riscv/kernel/cpufeature.c         |  3 ++-
> >  5 files changed, 61 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/riscv/Kconfig.errata b/arch/riscv/Kconfig.errata
> > index 4745a5c57e7c..eb43677b13cc 100644
> > --- a/arch/riscv/Kconfig.errata
> > +++ b/arch/riscv/Kconfig.errata
> > @@ -96,4 +96,17 @@ config ERRATA_THEAD_WRITE_ONCE
> > 
> >  	  If you don't know what to do here, say "Y".
> > 
> > +config ERRATA_THEAD_QSPINLOCK
> > +	bool "Apply T-Head queued spinlock errata"
> > +	depends on ERRATA_THEAD
> > +	default y
> > +	help
> > +	  The T-HEAD C9xx processors implement strong fwd guarantee LR/SC to
> > +	  match the xchg_tail requirement of qspinlock.
> > +
> > +	  This will apply the QSPINLOCK errata to handle the non-standard
> > +	  behavior via using qspinlock instead of ticket_lock.
> > +
> > +	  If you don't know what to do here, say "Y".
> 
> If this is to be applied, I would like to see a detailed explanation somewhere,
> preferably with citations, of:
> 
> (a) The memory model requirements for qspinlock
These were written down in commit a8ad07e5240 ("asm-generic: qspinlock: Indicate the use of
mixed-size atomics"). For riscv, the most controversial point is the xchg_tail()
implementation of the native queued spinlock.
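
For reference, this is roughly the generic xchg_tail() from
kernel/locking/qspinlock.c (the NR_CPUS < 16K variant): a 16-bit
exchange on the tail halfword of the 32-bit lock word, i.e. exactly
the mixed-size atomic that commit documents:

  static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
  {
          /*
           * Relaxed semantics are fine: the caller has already
           * initialized its MCS node before publishing it in the tail.
           */
          return (u32)xchg_relaxed(&lock->tail,
                                   tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
  }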

> (b) Why, with arguments, RISC-V does not architecturally meet (a)
The RISC-V ISA spec says, under "Eventual Success of Store-Conditional Instructions":
"By contrast, if other harts or devices continue to write to that reservation set, it is
not guaranteed that any hart will exit its LR/SC loop."

1. The arch_spinlock_t is 32 bits wide and contains a LOCK_PENDING
   part and an IDX_TAIL part.
    - LOCK:     lock holder
    - PENDING:  next waiter (only one per contended situation)
    - IDX:      nesting context (normal, hwirq, softirq, nmi)
    - TAIL:     last contended cpu
   xchg_tail() operates only on the IDX_TAIL half, while the lock
   holder and the pending waiter keep storing to the LOCK_PENDING half
   of the same 32-bit word. So there is no guarantee that no "other
   harts or devices continue to write to that reservation set" (see
   the LR/SC sketch after the diagram below).

2. When you run the lock torture test, you may see a long contended ring queue:
                                                                xchg_tail
                                                                    +-----> CPU4 (big core)
                                                                    |
   CPU3 (lock holder) -> CPU1 (mcs queued) -> CPU2 (mcs queued) ----+-----> CPU0 (little core)
    |                                                               |
    |                                                               +-----> CPU5 (big core)
    |                                                               |
    +--locktorture release lock (spin_unlock) and spin_lock again --+-----> CPU3 (big core)

    If CPU0 doesn't provide a strong forward progress guarantee, its
    xchg_tail() can keep failing indefinitely.
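
To make point 1 concrete, here is a minimal sketch (assuming
little-endian, and not the exact kernel code; the helper name
xchg16_relaxed() is just for illustration) of how a 16-bit xchg must
be lowered on riscv without a native sub-word AMO: an LR.W/SC.W loop
over the aligned 32-bit word containing the tail. A store by any
other hart to the lock/pending bytes hits the same reservation set
and can abort the SC, so the constrained-loop eventuality guarantee
quoted above does not apply:

  static inline u16 xchg16_relaxed(u16 *p, u16 newval)
  {
          u32 *aligned = (u32 *)((unsigned long)p & ~0x3UL);
          unsigned int shift = ((unsigned long)p & 0x2) * 8;  /* 0 or 16 */
          u32 mask = 0xffffU << shift;
          u32 new = (u32)newval << shift;
          u32 old, tmp;

          __asm__ __volatile__ (
                  "0:     lr.w    %0, %2\n"       /* reserve the whole word   */
                  "       and     %1, %0, %3\n"   /* keep LOCK_PENDING bytes  */
                  "       or      %1, %1, %4\n"   /* insert the new tail      */
                  "       sc.w    %1, %1, %2\n"   /* aborts if the word moved */
                  "       bnez    %1, 0b\n"
                  : "=&r" (old), "=&r" (tmp), "+A" (*aligned)
                  : "r" (~mask), "r" (new)
                  : "memory");

          return (u16)((old & mask) >> shift);
  }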

> (c) Why, with arguments, T-HEAD C9xx meets (a)
> (d) Why at least one other architecture which defines ARCH_USE_QUEUED_SPINLOCKS
>     meets (a)
I can't give the C9xx microarchitecture implementation details. But
many open-source riscv cores do provide a strong forward progress
guarantee for LR/SC [1] [2]. I would say these implementations are
too crude, though: they make every LR send a cacheline-unique
interconnect request. That satisfies xchg_tail, but not cmpxchg &
cond_load; see the spin-loop sketch below the links. CPU vendors
should consider their LR/SC forward progress guarantee implementation
carefully.

[1]: https://github.com/riscv-boom/riscv-boom/blob/v3.0.0/src/main/scala/lsu/dcache.scala#L650
[2]: https://github.com/OpenXiangShan/XiangShan/blob/v1.0/src/main/scala/xiangshan/cache/MainPipe.scala#L470
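
For example, the head waiter in qspinlock's slowpath spins with an
acquire-load loop, roughly (from queued_spin_lock_slowpath()):

  /*
   * Poll lock->val until both the lock and pending bits are clear.
   * The polling load only needs the cacheline Shared; if the core's
   * forward-progress trick makes every load-reserved fetch the line
   * Unique, each poll steals the line from the hart that is about to
   * release the lock, delaying the release itself.
   */
  val = atomic_cond_read_acquire(&lock->val,
                                 !(VAL & _Q_LOCKED_PENDING_MASK));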

> 
> As far as I can tell, the RISC-V guarantees concerning constrained LR/SC loops
> (livelock freedom but no starvation freedom) are exactly the same as those in
> Armv8 (as of 0487F.c) for equivalent loops, and xchg_tail compiles to a
> constrained LR/SC loop with guaranteed eventual success (with -O1).  Clearly you
> disagree; I would like to see your perspective.
For Armv8, I would use LSE atomics for the lock-contended scenario.
See commit 0ea366f5e1b6 ("arm64: atomics: prefetch the destination
word for write prior to stxr").
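
A minimal sketch (assuming ARMv8.1-A LSE is available; not the exact
kernel code, and the name xchg32_lse() is just for illustration) of
why LSE sidesteps the problem: xchg becomes a single far-atomic SWP,
so forward progress no longer depends on an exclusive reservation
surviving a retry loop:

  static inline u32 xchg32_lse(u32 *p, u32 newval)
  {
          u32 old;

          /* One instruction, no LL/SC loop: another CPU writing the
           * same cacheline cannot make this retry; the cache or the
           * interconnect arbitrates the atomic directly. */
          asm volatile("swpal   %w2, %w0, %1"
                       : "=&r" (old), "+Q" (*p)
                       : "r" (newval)
                       : "memory");
          return old;
  }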

> 
> -s
> 
> > +
> >  endmenu # "CPU errata selection"
> > diff --git a/arch/riscv/errata/thead/errata.c b/arch/riscv/errata/thead/errata.c
> > index 881729746d2e..d560dc45c0e7 100644
> > --- a/arch/riscv/errata/thead/errata.c
> > +++ b/arch/riscv/errata/thead/errata.c
> > @@ -86,6 +86,27 @@ static bool errata_probe_write_once(unsigned int stage,
> >  	return false;
> >  }
> > 
> > +static bool errata_probe_qspinlock(unsigned int stage,
> > +				   unsigned long arch_id, unsigned long impid)
> > +{
> > +	if (!IS_ENABLED(CONFIG_ERRATA_THEAD_QSPINLOCK))
> > +		return false;
> > +
> > +	/*
> > +	 * The queued_spinlock torture would get in livelock without
> > +	 * ERRATA_THEAD_WRITE_ONCE fixup for the early versions of T-HEAD
> > +	 * processors.
> > +	 */
> > +	if (arch_id == 0 && impid == 0 &&
> > +	    !IS_ENABLED(CONFIG_ERRATA_THEAD_WRITE_ONCE))
> > +		return false;
> > +
> > +	if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
> > +		return true;
> > +
> > +	return false;
> > +}
> > +
> >  static u32 thead_errata_probe(unsigned int stage,
> >  			      unsigned long archid, unsigned long impid)
> >  {
> > @@ -103,6 +124,9 @@ static u32 thead_errata_probe(unsigned int stage,
> >  	if (errata_probe_write_once(stage, archid, impid))
> >  		cpu_req_errata |= BIT(ERRATA_THEAD_WRITE_ONCE);
> > 
> > +	if (errata_probe_qspinlock(stage, archid, impid))
> > +		cpu_req_errata |= BIT(ERRATA_THEAD_QSPINLOCK);
> > +
> >  	return cpu_req_errata;
> >  }
> > 
> > diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h
> > index fbb2b8d39321..a696d18d1b0d 100644
> > --- a/arch/riscv/include/asm/errata_list.h
> > +++ b/arch/riscv/include/asm/errata_list.h
> > @@ -141,6 +141,26 @@ asm volatile(ALTERNATIVE(						\
> >  	: "=r" (__ovl) :						\
> >  	: "memory")
> > 
> > +static __always_inline bool
> > +riscv_has_errata_thead_qspinlock(void)
> > +{
> > +	if (IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
> > +		asm_volatile_goto(
> > +		ALTERNATIVE(
> > +		"j	%l[l_no]", "nop",
> > +		THEAD_VENDOR_ID,
> > +		ERRATA_THEAD_QSPINLOCK,
> > +		CONFIG_ERRATA_THEAD_QSPINLOCK)
> > +		: : : : l_no);
> > +	} else {
> > +		goto l_no;
> > +	}
> > +
> > +	return true;
> > +l_no:
> > +	return false;
> > +}
> > +
> >  #endif /* __ASSEMBLY__ */
> > 
> >  #endif
> > diff --git a/arch/riscv/include/asm/vendorid_list.h b/arch/riscv/include/asm/vendorid_list.h
> > index 73078cfe4029..1f1d03877f5f 100644
> > --- a/arch/riscv/include/asm/vendorid_list.h
> > +++ b/arch/riscv/include/asm/vendorid_list.h
> > @@ -19,7 +19,8 @@
> >  #define	ERRATA_THEAD_CMO 1
> >  #define	ERRATA_THEAD_PMU 2
> >  #define	ERRATA_THEAD_WRITE_ONCE 3
> > -#define	ERRATA_THEAD_NUMBER 4
> > +#define	ERRATA_THEAD_QSPINLOCK 4
> > +#define	ERRATA_THEAD_NUMBER 5
> >  #endif
> > 
> >  #endif
> > diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> > index f8dbbe1bbd34..d9694fe40a9a 100644
> > --- a/arch/riscv/kernel/cpufeature.c
> > +++ b/arch/riscv/kernel/cpufeature.c
> > @@ -342,7 +342,8 @@ void __init riscv_fill_hwcap(void)
> >  		 * spinlock value, the only way is to change from queued_spinlock to
> >  		 * ticket_spinlock, but can not be vice.
> >  		 */
> > -		if (!force_qspinlock) {
> > +		if (!force_qspinlock &&
> > +		    !riscv_has_errata_thead_qspinlock()) {
> >  			set_bit(RISCV_ISA_EXT_XTICKETLOCK, isainfo->isa);
> >  		}
> >  #endif
> > -- 
> > 2.36.1