Overhead of arm64 LSE per-CPU atomics?

Paul E. McKenney paulmck at kernel.org
Fri Oct 31 20:25:07 PDT 2025


On Fri, Oct 31, 2025 at 04:38:57PM -0700, Paul E. McKenney wrote:
> On Fri, Oct 31, 2025 at 10:43:35PM +0000, Catalin Marinas wrote:
> > On Fri, Oct 31, 2025 at 12:39:41PM -0700, Paul E. McKenney wrote:
> > > On Fri, Oct 31, 2025 at 06:30:31PM +0000, Catalin Marinas wrote:
> > > > On Thu, Oct 30, 2025 at 03:37:00PM -0700, Paul E. McKenney wrote:
> > > > > To make event tracing safe for PREEMPT_RT kernels, I have been creating
> > > > > optimized variants of SRCU readers that use per-CPU atomics.  This works
> > > > > quite well, but on ARM Neoverse V2, I am seeing about 100ns for a
> > > > > srcu_read_lock()/srcu_read_unlock() pair, or about 50ns for a single
> > > > > per-CPU atomic operation.  This contrasts with a handful of nanoseconds
> > > > > on x86 and similar on ARM for an atomic_set(&foo, atomic_read(&foo) + 1).
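
For concreteness, the two flavors being compared look roughly like the
following sketch; the counter names are made up for illustration and are
not the actual SRCU fields:

	/* Per-CPU atomic path: ~50ns per operation on Neoverse V2 with LSE. */
	static DEFINE_PER_CPU(unsigned long, pcpu_ctr);

	static inline void pcpu_path(void)
	{
		this_cpu_inc(pcpu_ctr);
	}

	/* Plain load/add/store path: a handful of ns on both x86 and arm64. */
	static atomic_t plain_ctr;

	static inline void plain_path(void)
	{
		atomic_set(&plain_ctr, atomic_read(&plain_ctr) + 1);
	}
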
> > > > 
> > > > That's quite a difference. Does it get any better if
> > > > CONFIG_ARM64_LSE_ATOMICS is disabled? We don't have a way to disable it
> > > > on the kernel command line.
> > > 
> > > In other words, build with CONFIG_ARM64_USE_LSE_ATOMICS=n, correct?
> > 
> > Yes.
> > 
> > > Yes, this gets me more than an order of magnitude improvement, and about
> > > 30% better than my workaround of disabling interrupts around a non-atomic
> > > increment of those counters, thank you!
> > > 
> > > Given that per-CPU atomics are usually not heavily contended, would it
> > > make sense to avoid LSE in that case?
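
The interrupt-disabling workaround mentioned above amounts to something
like the following sketch; the per-CPU counter name is illustrative:

	static DEFINE_PER_CPU(unsigned long, my_srcu_ctr);	/* illustrative name */

	static inline void reader_inc_irqoff(void)
	{
		unsigned long flags;

		local_irq_save(flags);
		__this_cpu_inc(my_srcu_ctr);	/* plain increment, safe with IRQs off */
		local_irq_restore(flags);
	}
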
> > 
> > In theory the LSE atomics should be as fast but microarchitecture
> > decisions likely did not cover all the use-cases. I'll raise this
> > internally as well, maybe we get some ideas from the hardware people.
> 
> Understood, and please let me know what you can from the hardware people.
> 
> > > And I need to figure out whether I should recommend that Meta build
> > > its arm64 kernels with CONFIG_ARM64_USE_LSE_ATOMICS=n.  Any advice you
> > > might have would be deeply appreciated!  (I am of course also following
> > > up internally.)
> > 
> > I wouldn't advise turning them off just yet; they are beneficial for
> > other use-cases. But it needs more thinking (and not that late at night ;)).
> 
> Fair enough!
> 
> > > > Interestingly, we had this patch recently to force a prefetch before the
> > > > atomic:
> > > > 
> > > > https://lore.kernel.org/all/20250724120651.27983-1-yangyicong@huawei.com/
> > > > 
> > > > We rejected it but I wonder whether it improves the SRCU scenario.
> > > 
> > > No statistically significant difference on my system.  This is a 72-CPU Neoverse V2, in
> > > case that matters.
> > 
> > I just realised that patch doesn't touch percpu.h at all. So what about
> > something like (untested):
> > 
> > -----------------8<------------------------
> > diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
> > index 9abcc8ef3087..e381034324e1 100644
> > --- a/arch/arm64/include/asm/percpu.h
> > +++ b/arch/arm64/include/asm/percpu.h
> > @@ -70,6 +70,7 @@ __percpu_##name##_case_##sz(void *ptr, unsigned long val)		\
> >  	unsigned int loop;						\
> >  	u##sz tmp;							\
> >  									\
> > +	asm volatile("prfm pstl1strm, %a0\n" : : "p" (ptr));	\
> >  	asm volatile (ARM64_LSE_ATOMIC_INSN(				\
> >  	/* LL/SC */							\
> >  	"1:	ldxr" #sfx "\t%" #w "[tmp], %[ptr]\n"			\
> > @@ -91,6 +92,7 @@ __percpu_##name##_return_case_##sz(void *ptr, unsigned long val)	\
> >  	unsigned int loop;						\
> >  	u##sz ret;							\
> >  									\
> > +	asm volatile("prfm pstl1strm, %a0\n" : : "p" (ptr));	\
> >  	asm volatile (ARM64_LSE_ATOMIC_INSN(				\
> >  	/* LL/SC */							\
> >  	"1:	ldxr" #sfx "\t%" #w "[ret], %[ptr]\n"			\
> > -----------------8<------------------------
> 
> I will give this a shot, thank you!

Jackpot!!!

This reduces the overhead to 8.427ns, which is significantly better than
the non-LSE value of 9.853ns.  Still room for improvement, but much
better than the 100ns values.
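
For reference, a measurement of this sort can be done by timing the
underlying per-CPU operations in a tight loop, roughly as in the
following sketch (the variable name, loop count, and reporting are
illustrative, not the actual test code):

	static DEFINE_PER_CPU(unsigned long, bench_ctr);

	static void bench_percpu_pair(void)
	{
		const unsigned long n = 1000 * 1000;
		unsigned long i;
		u64 t0, t1;

		preempt_disable();
		t0 = ktime_get_ns();
		for (i = 0; i < n; i++) {
			this_cpu_inc(bench_ctr);
			this_cpu_dec(bench_ctr);
		}
		t1 = ktime_get_ns();
		preempt_enable();

		pr_info("this_cpu_inc()/this_cpu_dec() pair: %llu ns\n",
			(t1 - t0) / n);
	}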

I presume that you will send this up the normal path, but in the meantime,
I will pull this in for further local testing, and thank you!

							Thanx, Paul

> > > Here are my results for the underlying this_cpu_inc()
> > > and this_cpu_dec() pair of operations:
> > > 
> > >                                 LSE Atomics Enabled (Stock)    LSE Atomics Disabled
> > > Without Yicong’s Patch (Stock)                   110.786 ns                9.852 ns
> > > With Yicong’s Patch                              109.873 ns                9.853 ns
> > > 
> > > As you can see, disabling LSE improves things by about an order of
> > > magnitude, and Yicong's patch has no statistically significant effect.
> > > 
> > > This and more can be found in the "Per-CPU Increment/Decrement"
> > > section of this Google document:
> > > 
> > > https://docs.google.com/document/d/1RoYRrTsabdeTXcldzpoMnpmmCjGbJNWtDXN6ZNr_4H8/edit?usp=sharing
> > > 
> > > Full disclosure: Calls to srcu_read_lock_fast() followed by
> > > srcu_read_unlock_fast() really use one this_cpu_inc() followed by another
> > > this_cpu_inc(), but I am not seeing any difference between the two.
> > > And testing the underlying primitives allows my tests to give reproducible
> > > results regardless of what state I have the SRCU code in.  ;-)
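
In rough outline, the reader fast path described above is just that pair
of per-CPU increments, something like the following sketch (counter
names are illustrative and simplified; the actual SRCU data structures
differ):

	static DEFINE_PER_CPU(unsigned long, reader_lock_count);
	static DEFINE_PER_CPU(unsigned long, reader_unlock_count);

	static inline void reader_lock_fast(void)
	{
		this_cpu_inc(reader_lock_count);	/* one this_cpu_inc() to enter */
	}

	static inline void reader_unlock_fast(void)
	{
		this_cpu_inc(reader_unlock_count);	/* another this_cpu_inc() to exit */
	}
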
> > 
> > Thanks. I'll go through your emails in more detail tomorrow/Monday.
> 
> Thank you!  Not violently urgent, but I do look forward to hearing what
> you come up with.  In the meantime, I am testing with the patch I sent
> and will let you know if problems arise.  So far, so good...
> 
> 							Thanx, Paul


