[RFC][PATCH] locking: Generic ticket-lock

Peter Zijlstra peterz at infradead.org
Thu Apr 15 09:09:54 BST 2021


On Thu, Apr 15, 2021 at 05:47:34AM +0900, Stafford Horne wrote:

> > How's this then? Compile tested only on openrisc/simple_smp_defconfig.
> 
> I did my testing with this FPGA build SoC:
> 
>  https://github.com/stffrdhrn/de0_nano-multicore
> 
> Note, the CPU timer sync logic uses mb() and is a bit flaky.  So missing mb()
> might be a reason.  I thought we had defined mb() and l.msync, but it seems to
> have gotten lost.
> 
> With that said I could test out this ticket-lock implementation.  How would I
> tell if it's better than qspinlock?

Mostly if it isn't worse, it's better for being *much* simpler. As you
can see, the guts of ticket is like 16 lines of C (lock+unlock) and you
only need the behaviour of atomic_fetch_add() to reason about the whole
thing. qspinlock OTOH is mind-bendingly painful to reason about.
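
For concreteness, a sketch of roughly what those ~16 lines look like.
This is illustration, not the posted patch itself: the 32-bit lock word
layout (the "next" ticket in the high half, the ticket being served in
the low half), the little-endian shortcut in unlock and the helper
names are assumptions made for the example.

/*
 * Ticket lock sketch on a 32-bit atomic_t: the high 16 bits hold the
 * next ticket to hand out, the low 16 bits the ticket being served.
 */
static __always_inline void ticket_lock(atomic_t *lock)
{
	u32 val = atomic_fetch_add(1 << 16, lock);	/* grab a ticket */
	u16 ticket = val >> 16;

	if (ticket == (u16)val)				/* uncontended */
		return;

	/* Wait for the low half to reach our ticket; VAL is the loaded value. */
	atomic_cond_read_acquire(lock, ticket == (u16)VAL);
}

static __always_inline void ticket_unlock(atomic_t *lock)
{
	u16 *ptr = (u16 *)lock;		/* low half; little-endian for brevity */
	u32 val = atomic_read(lock);

	/* Hand the lock to the next ticket holder. */
	smp_store_release(ptr, (u16)(val + 1));
}

All the ordering the fast path needs comes from the fully ordered
atomic_fetch_add(); the contended path gets its ordering from the
acquire in atomic_cond_read_acquire().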

There are some spinlock tests in locktorture; but back in the day I had
a userspace copy of the lot and would measure min/avg/max acquire times
under various contention loads (making sure to only run a single task
per CPU etc. to avoid lock-holder preemption and other such 'fun'
things).
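
For the "how would I tell" question, below is a rough userspace sketch
of that kind of measurement. It is not the harness referred to above:
pthread_spinlock_t merely stands in for whichever lock implementation
you wire in, and the thread/iteration counts are arbitrary.

/*
 * Min/avg/max acquire-latency sketch; pin one thread per CPU to avoid
 * lock-holder preemption.  Build: gcc -O2 -pthread bench.c -o bench
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS	4
#define ITERS		100000

static pthread_spinlock_t lock;

struct worker {
	int cpu;
	uint64_t min, max, sum;
};

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

static void *bench(void *arg)
{
	struct worker *w = arg;
	cpu_set_t set;

	/* One task per CPU: no lock-holder preemption. */
	CPU_ZERO(&set);
	CPU_SET(w->cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	w->min = UINT64_MAX;
	for (int i = 0; i < ITERS; i++) {
		uint64_t t0 = now_ns();

		pthread_spin_lock(&lock);
		uint64_t dt = now_ns() - t0;	/* acquire latency only */
		pthread_spin_unlock(&lock);

		if (dt < w->min)
			w->min = dt;
		if (dt > w->max)
			w->max = dt;
		w->sum += dt;
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	struct worker w[NTHREADS] = { 0 };

	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);

	for (int i = 0; i < NTHREADS; i++) {
		w[i].cpu = i;
		pthread_create(&tid[i], NULL, bench, &w[i]);
	}
	for (int i = 0; i < NTHREADS; i++) {
		pthread_join(tid[i], NULL);
		printf("cpu%d: min=%llu avg=%llu max=%llu (ns)\n", i,
		       (unsigned long long)w[i].min,
		       (unsigned long long)(w[i].sum / ITERS),
		       (unsigned long long)w[i].max);
	}
	return 0;
}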

It took us a fair amount of work to get qspinlock to compete with
ticket for the low-contention case (by far the most common in the
kernel), and it took a fairly large number of CPUs before qspinlock
really won over ticket in the contended case. Your hardware may vary.
In particular, the access to the extra cacheline (for queueing, see the
queue: label in queued_spin_lock_slowpath()) is a pain point, and the
relative cost of cacheline misses on your arch determines where (and
if) low-contention behaviour is competitive.

Also, less variance (the reason for the min/max measure) is better.
Large variance is typically a sign of forward-progress trouble.

That's not to say that qspinlock isn't awesome, but I'm arguing that
you should get there by first trying all the simpler things. By
gradually increasing complexity you can also find the problem spots
(for your architecture) and you have something to fall back to in case
of trouble.

Now, the obvious selling point of qspinlock is that, due to the
MCS-style nature of the thing, it doesn't bounce the lock around; but
that comes at the cost of having to use that extra cacheline (due to
the kernel liking sizeof(spinlock_t) == sizeof(u32)). But things like
ARM64's WFE (see smp_cond_load_acquire()) can shift the balance quite a
bit on that front as well (32-bit ARM has a similar but less useful
thing; see its spinlock.h and look for wfe() and dsb_sev()).
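
To make that concrete, here is the difference in the wait loop. The
struct layout and function names below are made up for illustration;
smp_cond_load_acquire() is the real primitive.

/* Illustration only; layout and names are invented. */
struct example_lock {
	u16 owner;	/* ticket currently being served */
	u16 next;	/* next ticket to hand out */
};

static void wait_for_ticket_naive(struct example_lock *lock, u16 ticket)
{
	/* Re-reads the owner field flat out, bouncing the cacheline. */
	while (smp_load_acquire(&lock->owner) != ticket)
		cpu_relax();
}

static void wait_for_ticket_wfe(struct example_lock *lock, u16 ticket)
{
	/*
	 * On arm64 this becomes a load-acquire + WFE loop: the CPU waits
	 * for the cacheline to change hands instead of re-polling it.
	 * VAL is the value loaded on each iteration.
	 */
	smp_cond_load_acquire(&lock->owner, VAL == ticket);
}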

Once your arch hits NUMA, qspinlock is probably a win. However, low
contention performance is still king for most workloads. Better high
contention behaviour is nice.


