[PATCH v4 3/4] locking/qspinlock: Add ARCH_USE_QUEUED_SPINLOCKS_XCHG32

Guo Ren guoren at kernel.org
Mon Mar 29 12:19:29 BST 2021


On Mon, Mar 29, 2021 at 3:50 PM Peter Zijlstra <peterz at infradead.org> wrote:
>
> On Sat, Mar 27, 2021 at 06:06:38PM +0000, guoren at kernel.org wrote:
> > From: Guo Ren <guoren at linux.alibaba.com>
> >
> > Some architectures don't have a sub-word swap atomic instruction;
> > they only have the full-word one.
> >
> > The sub-word swap only improves performance when NR_CPUS < 16K:
> >  *  0- 7: locked byte
> >  *     8: pending
> >  *  9-15: not used
> >  * 16-17: tail index
> >  * 18-31: tail cpu (+1)
> >
> > Bits 9-15 are wasted in order to use xchg16 in xchg_tail.
> >
> > Please let the architecture select xchg16/xchg32 to implement
> > xchg_tail.
>
> So I really don't like this, this pushes complexity into the generic
> code for something that's really not needed.
>
> Lots of RISC already implement sub-word atomics using word ll/sc.
> Obviously they're not sharing code like they should be :/ See for
> example arch/mips/kernel/cmpxchg.c.
I see; we've done two versions of this:
 - Using the cmpxchg code from MIPS, by Michael (roughly the approach
   sketched below)
 - A rewrite in assembly, by Guo
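For reference, a minimal sketch of the MIPS-style emulation, i.e. looping
on a full-word cmpxchg over the aligned 32-bit word that contains the
halfword. This is illustrative only: it assumes a little-endian layout
for the shift computation, and the helper name is made up, not the code
from either of our versions.

/*
 * Sketch: emulate a 16-bit xchg with a full-word cmpxchg loop,
 * roughly the idea in arch/mips/kernel/cmpxchg.c. Little-endian
 * is assumed; names are illustrative.
 */
static inline u16 xchg16_emulated(volatile u16 *ptr, u16 new)
{
	u32 *word = (u32 *)((unsigned long)ptr & ~0x3UL);
	unsigned int shift = ((unsigned long)ptr & 0x2UL) * BITS_PER_BYTE;
	u32 mask = 0xffffU << shift;
	u32 old, tmp, val = READ_ONCE(*word);

	do {
		old = val;
		/* Replace only the halfword we care about. */
		tmp = (old & ~mask) | ((u32)new << shift);
		val = cmpxchg(word, old, tmp);
	} while (val != old);

	return (u16)((old & mask) >> shift);
}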

But implementing xchg16 on top of full-word atomic xchg instructions
carries a semantic risk for atomic operations.

I don't think exporting xchg16 on a machine without sub-word atomics is correct.
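With ARCH_USE_QUEUED_SPINLOCKS_XCHG32 selected, xchg_tail instead falls
back to a full-word cmpxchg loop, along the lines of the variant that
kernel/locking/qspinlock.c already uses for the NR_CPUS >= 16K case.
A sketch (details may differ from the actual patch):

/*
 * Sketch of the full-word xchg_tail variant: exchange the tail field
 * with a 32-bit cmpxchg loop instead of a 16-bit xchg, preserving the
 * locked/pending bytes. Mirrors the existing _Q_PENDING_BITS == 4 path.
 */
static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	u32 old, new, val = atomic_read(&lock->val);

	for (;;) {
		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
		old = atomic_cmpxchg_relaxed(&lock->val, val, new);
		if (old == val)
			break;

		val = old;
	}
	return old;
}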

>
> Also, I really do think doing ticket locks first is a far more sensible
> step.
Ticket locks were NACKed by Anup.

--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


