[PATCH v4 3/4] locking/qspinlock: Add ARCH_USE_QUEUED_SPINLOCKS_XCHG32
Peter Zijlstra
peterz at infradead.org
Mon Mar 29 12:16:52 BST 2021
On Mon, Mar 29, 2021 at 11:41:19AM +0200, Arnd Bergmann wrote:
> On Mon, Mar 29, 2021 at 9:52 AM Peter Zijlstra <peterz at infradead.org> wrote:
> > On Sat, Mar 27, 2021 at 06:06:38PM +0000, guoren at kernel.org wrote:
> > > From: Guo Ren <guoren at linux.alibaba.com>
> > >
> > > Some architectures don't have a sub-word atomic swap instruction;
> > > they only have a full-word one.
> > >
> > > The sub-word swap only improves performance when:
> > > NR_CPUS < 16K
> > > * 0- 7: locked byte
> > > * 8: pending
> > > * 9-15: not used
> > > * 16-17: tail index
> > > * 18-31: tail cpu (+1)
> > >
> > > Bits 9-15 are sacrificed so that xchg16 can be used in xchg_tail.
> > >
> > > Please let architectures select xchg16/xchg32 to implement
> > > xchg_tail.
> >
> > So I really don't like this, this pushes complexity into the generic
> > code for something that's really not needed.
> >
> > Lots of RISC architectures already implement sub-word atomics using
> > word ll/sc.
> > Obviously they're not sharing code like they should be :/ See for
> > example arch/mips/kernel/cmpxchg.c.
>
> That is what the previous version of the patch set did, right?
>
> I think this v4 is nicer because the code is already there in
> qspinlock.c and just gets moved around, and the implementation
> is likely more efficient this way. The mips version could be made
> more generic, but it is also less efficient than a simple xchg
> since it requires an indirect function call plus nesting a pair of
> loops instead in place of the single single ll/sc loop in the 32-bit
> xchg.
>
> I think the weakly typed xchg/cmpxchg implementation causes
> more problems than it solves, and we'd be better off using
> a stronger version in general, with the 8-bit and 16-bit exchanges
> using separate helpers in the same way that the fixed-length
> cmpxchg64 is already separate; there are only a couple of instances
> of each in the kernel.
>
> Unfortunately, there is roughly a 50:50 split between fixed 32-bit
> and long/pointer-sized xchg/cmpxchg users in the kernel, so
> making the interface completely fixed-type would add a ton of
> churn. I created an experimental patch for this, but it's probably
> not worth it.
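For reference, the NR_CPUS < 16K word layout quoted at the top boils
down to the following constants. The names mirror the kernel's
include/asm-generic/qspinlock_types.h, but the values are reproduced
here purely for illustration:

```c
#include <stdint.h>

/* Sketch of the NR_CPUS < 16K qspinlock word layout; not the real header. */
#define _Q_LOCKED_OFFSET	0
#define _Q_LOCKED_BITS		8	/* bits 0-7: locked byte */
#define _Q_PENDING_OFFSET	8	/* bit 8: pending; bits 9-15 unused */
#define _Q_PENDING_BITS		8
#define _Q_TAIL_IDX_OFFSET	16	/* bits 16-17: tail index */
#define _Q_TAIL_IDX_BITS	2
#define _Q_TAIL_CPU_OFFSET	18	/* bits 18-31: tail cpu (+1) */
#define _Q_TAIL_CPU_BITS	14

#define _Q_TAIL_OFFSET		_Q_TAIL_IDX_OFFSET
#define _Q_TAIL_MASK		(((1U << (_Q_TAIL_IDX_BITS + _Q_TAIL_CPU_BITS)) - 1) \
				 << _Q_TAIL_OFFSET)
```

The whole tail lands in the aligned upper half-word, which is what
makes a 16-bit xchg on it possible at all.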
The mips code is pretty horrible. Using a cmpxchg loop on an ll/sc arch
is just daft. And that's exactly what the generic xchg_tail() thing does
too.
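For illustration, the pattern in question (a sub-word xchg built from a
full-word compare-and-swap, in the style of arch/mips/kernel/cmpxchg.c)
looks roughly like this. This is a sketch using GCC's __atomic builtins,
not the actual mips code, and the function name is made up:

```c
#include <stdint.h>

/*
 * Sketch (not kernel code) of a 16-bit xchg emulated with a 32-bit
 * compare-and-swap loop.  'shift' selects the half-word: 0 for the
 * low half, 16 for the high half.
 */
static uint16_t xchg16_via_cmpxchg32(uint32_t *word, uint16_t newval, int shift)
{
	uint32_t mask = 0xffffu << shift;
	uint32_t old = __atomic_load_n(word, __ATOMIC_RELAXED);
	uint32_t next;

	for (;;) {
		next = (old & ~mask) | ((uint32_t)newval << shift);
		/*
		 * Strong compare-exchange: on an ll/sc architecture this
		 * is itself an ll/sc loop, nested inside our retry loop.
		 */
		if (__atomic_compare_exchange_n(word, &old, next, 0,
						__ATOMIC_RELAXED, __ATOMIC_RELAXED))
			return (uint16_t)(old >> shift);
		/* 'old' was reloaded on failure; go around again */
	}
}
```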
A single LL/SC loop that sets either the upper or lower 16 bits of the
word is always better.
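In portable terms the single-loop variant looks something like the
sketch below (GCC __atomic builtins, not the kernel's implementation,
with the real memory ordering elided). The point is the *weak*
compare-exchange: on an ll/sc target such as RISC-V each iteration can
compile to one lr.w/sc.w pair, so the whole thing is a single ll/sc
loop that rewrites only the upper 16 bits:

```c
#include <stdint.h>

#define TAIL_MASK 0xffff0000u	/* tail idx + tail cpu, per the quoted layout */

/*
 * Sketch of xchg_tail() as one loop: 'tail' is already shifted into
 * the upper 16 bits; the old lock word is returned.  A weak CAS may
 * fail spuriously, but on an ll/sc target that is just the sc
 * failing, so each iteration is one ll/sc attempt -- no nested loop.
 */
static uint32_t xchg_tail_sketch(uint32_t *lock, uint32_t tail)
{
	uint32_t old = __atomic_load_n(lock, __ATOMIC_RELAXED);
	uint32_t newval;

	do {
		newval = (old & ~TAIL_MASK) | tail;
	} while (!__atomic_compare_exchange_n(lock, &old, newval, 1,
					      __ATOMIC_RELAXED, __ATOMIC_RELAXED));
	return old;
}
```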
Anyway, an additional 'funny' is that I suspect you cannot prove forward
progress of the entire primitive with any of this enabled. But who cares
about details anyway.. :/
And the whole WFE optimization that was relevant for the ticket lock is
_still_ relevant for qspinlock, except it seems to have gone missing
again.
I just don't have much confidence here that people actually understand
what they're doing or why.
More information about the linux-riscv mailing list