[PATCH] riscv: locks: introduce ticket-based spinlock implementation
Peter Zijlstra
peterz at infradead.org
Wed Apr 14 08:08:18 BST 2021
On Wed, Apr 14, 2021 at 10:26:57AM +0800, Guo Ren wrote:
> Thx Peter,
>
> On Tue, Apr 13, 2021 at 4:17 PM Peter Zijlstra <peterz at infradead.org> wrote:
> >
> > On Tue, Apr 13, 2021 at 10:03:01AM +0200, Peter Zijlstra wrote:
> >
> > > For ticket locks you really only need atomic_fetch_add() and
> > > smp_store_release(), and an architectural guarantee that the
> > > atomic_fetch_add() has forward progress under contention and that a
> > > sub-word store (through smp_store_release()) will fail the SC.
> > >
> > > Then you can do something like:
> > >
> > > void lock(atomic_t *lock)
> > > {
> > > 	u32 val = atomic_fetch_add(1<<16, lock); /* SC, gives us RCsc */
> > > 	u16 ticket = val >> 16;
> > >
> > > 	for (;;) {
> > > 		if (ticket == (u16)val)
> > > 			break;
> > > 		cpu_relax();
> > > 		val = atomic_read_acquire(lock);
> > > 	}
> Should it be?
> 	for (;;) {
> 		if (ticket == (u16)val) {
> 			__atomic_acquire_fence();
> 			break;
> 		}
No, atomic_fetch_add() is fully ordered (it implies a full smp_mb()); it
even has a comment that says so.

Also, __atomic_acquire_fence() is an implementation detail of the atomic
code, and architectures need not provide it. On top of that, IIRC the
atomic _acquire/_release variants have RCpc ordering, whereas we want our
locks to have RCsc ordering (and very much not weaker than RCtso). Even
more so, adding barriers to atomics should really not be conditional.
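
For completeness, here is a minimal sketch of how the scheme described in
the quoted mail fits together, including the sub-word release store on the
unlock side. The function names, the include list, and the little-endian
assumption for locating the owner half are illustrative assumptions, not
code taken from this thread:

#include <linux/atomic.h>
#include <linux/processor.h>	/* cpu_relax() */
#include <linux/types.h>
#include <asm/barrier.h>	/* smp_store_release() */

/*
 * 32-bit lock word: next ticket in the high 16 bits, current owner in
 * the low 16 bits (little-endian layout assumed below).
 */
void ticket_lock(atomic_t *lock)
{
	/* Fully ordered RMW: take the next ticket, observe the owner. */
	u32 val = atomic_fetch_add(1 << 16, lock);
	u16 ticket = val >> 16;

	for (;;) {
		if (ticket == (u16)val)
			break;		/* our turn; lock acquired */
		cpu_relax();
		val = atomic_read_acquire(lock);
	}
}

void ticket_unlock(atomic_t *lock)
{
	/* Owner half sits in the low 16 bits on a little-endian machine. */
	u16 *owner = (u16 *)lock;
	u32 val = atomic_read(lock);

	/*
	 * Sub-word release store; the architecture must guarantee that
	 * this store makes a concurrent LR/SC on the full word fail.
	 */
	smp_store_release(owner, (u16)val + 1);
}

No extra acquire fence is needed on either path: the uncontended path is
ordered by the fully ordered atomic_fetch_add(), and the contended path
rereads the lock word with atomic_read_acquire(), which is the point made
in the reply above.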