Overhead of arm64 LSE per-CPU atomics?
Willy Tarreau
w at 1wt.eu
Mon Nov 3 13:56:05 PST 2025
On Mon, Nov 03, 2025 at 09:49:56PM +0000, Catalin Marinas wrote:
> On Fri, Oct 31, 2025 at 08:25:07PM -0700, Paul E. McKenney wrote:
> > On Fri, Oct 31, 2025 at 04:38:57PM -0700, Paul E. McKenney wrote:
> > > On Fri, Oct 31, 2025 at 10:43:35PM +0000, Catalin Marinas wrote:
> > > > diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
> > > > index 9abcc8ef3087..e381034324e1 100644
> > > > --- a/arch/arm64/include/asm/percpu.h
> > > > +++ b/arch/arm64/include/asm/percpu.h
> > > > @@ -70,6 +70,7 @@ __percpu_##name##_case_##sz(void *ptr, unsigned long val) \
> > > > unsigned int loop; \
> > > > u##sz tmp; \
> > > > \
> > > > +	asm volatile("prfm pstl1strm, %a0\n" : : "p" (ptr));	\
> > > > asm volatile (ARM64_LSE_ATOMIC_INSN( \
> > > > /* LL/SC */ \
> > > > "1: ldxr" #sfx "\t%" #w "[tmp], %[ptr]\n" \
> > > > @@ -91,6 +92,7 @@ __percpu_##name##_return_case_##sz(void *ptr, unsigned long val) \
> > > > unsigned int loop; \
> > > > u##sz ret; \
> > > > \
> > > > +	asm volatile("prfm pstl1strm, %a0\n" : : "p" (ptr));	\
> > > > asm volatile (ARM64_LSE_ATOMIC_INSN( \
> > > > /* LL/SC */ \
> > > > "1: ldxr" #sfx "\t%" #w "[ret], %[ptr]\n" \
> > > > -----------------8<------------------------
> > >
> > > I will give this a shot, thank you!
> >
> > Jackpot!!!
> >
> > This reduces the overhead to 8.427ns, which is significantly better
> > than the non-LSE value of 9.853ns. Still room for improvement, but
> > much better than the 100ns values.
>
> Just curious, if you have time, could you try prefetchw() instead of the
> above asm? That would be a PRFM PSTL1KEEP instead of STRM. Do
> __srcu_read_lock() and __srcu_read_unlock() usually touch the same
> cache line?
FWIW, I tested PRFM PSTL1KEEP this morning on the Altra with haproxy, just
out of curiosity, and didn't notice any difference from PRFM PSTL1STRM. It
may behave differently in the kernel, though.
Willy
More information about the linux-arm-kernel mailing list