Overhead of arm64 LSE per-CPU atomics?

Catalin Marinas catalin.marinas at arm.com
Tue Nov 4 09:06:14 PST 2025


Hi Breno,

On Tue, Nov 04, 2025 at 07:59:38AM -0800, Breno Leitao wrote:
> On Fri, Oct 31, 2025 at 06:30:31PM +0000, Catalin Marinas wrote:
> > On Thu, Oct 30, 2025 at 03:37:00PM -0700, Paul E. McKenney wrote:
> > > To make event tracing safe for PREEMPT_RT kernels, I have been creating
> > > optimized variants of SRCU readers that use per-CPU atomics.  This works
> > > quite well, but on ARM Neoverse V2, I am seeing about 100ns for a
> > > srcu_read_lock()/srcu_read_unlock() pair, or about 50ns for a single
> > > per-CPU atomic operation.  This contrasts with a handful of nanoseconds
> > > on x86 and similar on ARM for an atomic_set(&foo, atomic_read(&foo) + 1).
> > 
> > That's quite a difference. Does it get any better if
> > CONFIG_ARM64_LSE_ATOMICS is disabled? We don't have a way to disable it
> > on the kernel command line.
> > 
> > Depending on the implementation and configuration, the LSE atomics may
> > skip the L1 cache and be executed closer to the memory (they used to be
> > called far atomics). The CPUs try to be smarter, e.g. doing the operation
> > "near" if the line is already in the cache, but the heuristics may not
> > always work.
> 
> I am trying to play with the LSE latency and compare it with the LL/SC case.
> I _think_ I have a reproducer in userspace.
> 
> I've created a simple userspace program to compare the latency of an atomic
> add using LL/SC and LSE, basically comparing the following two functions
> while executing without any contention (a single thread doing the atomic
> operation, so no contention on the variable):
> 
> 	static inline void __percpu_add_case_64_llsc(void *ptr, unsigned long val)
> 	{
> 		u64 tmp;
> 		unsigned int loop;
> 
> 		asm volatile(
> 			/* LL/SC: load-exclusive/add/store-exclusive, retry until it succeeds */
> 			"1:  ldxr    %[tmp], %[ptr]\n"
> 			"    add     %[tmp], %[tmp], %[val]\n"
> 			"    stxr    %w[loop], %[tmp], %[ptr]\n"
> 			"    cbnz    %w[loop], 1b"
> 			: [loop] "=&r"(loop), [tmp] "=&r"(tmp), [ptr] "+Q"(*(u64 *)ptr)
> 			: [val] "r"((u64)(val))
> 			: "memory");
> 	}
> 
> and
> 
> 	/* LSE implementation */
> 	static inline void __percpu_add_case_64_lse(void *ptr, unsigned long val)
> 	{
> 		asm volatile(
> 			/* LSE atomic: STADD has no destination register, so no value comes back */
> 			"    stadd    %[val], %[ptr]\n"
> 			: [ptr] "+Q"(*(u64 *)ptr)
> 			: [val] "r"((u64)(val))
> 			: "memory");
> 	}
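
Just to check that I understand the setup: I'm assuming a single-threaded
timing loop roughly like the sketch below, measuring the helpers above with
the generic counter (cntvct_el0/cntfrq_el0). The structure and names here are
my guess, not necessarily what your program does:

	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t u64;	/* the helpers above cast ptr to (u64 *) */

	/* paste __percpu_add_case_64_llsc()/__percpu_add_case_64_lse() here */

	static inline u64 read_cntvct(void)
	{
		u64 cnt;

		/* isb so the counter read isn't reordered around the timed loop */
		asm volatile("isb; mrs %0, cntvct_el0" : "=r"(cnt) : : "memory");
		return cnt;
	}

	int main(void)
	{
		static u64 counter;
		const unsigned long iters = 100000000UL;
		u64 start, end, freq;

		asm volatile("mrs %0, cntfrq_el0" : "=r"(freq));

		start = read_cntvct();
		for (unsigned long i = 0; i < iters; i++)
			__percpu_add_case_64_lse(&counter, 1);	/* or the _llsc variant */
		end = read_cntvct();

		printf("%.2f ns/op\n",
		       (double)(end - start) * 1e9 / (double)freq / (double)iters);
		return 0;
	}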

Could you try with an ldadd instead? See my reply to Paul a few minutes
ago.
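
Something like the sketch below, mirroring your stadd helper (the helper name
is mine; the difference is that LDADD has a destination register, so the CPU
has to bring the old value back, which may tip the near/far heuristics the
other way):

	static inline void __percpu_add_case_64_lse_ldadd(void *ptr, unsigned long val)
	{
		u64 old;

		asm volatile(
			/* LSE atomic: LDADD returns the old value in a register */
			"    ldadd   %[val], %[old], %[ptr]\n"
			: [old] "=&r"(old), [ptr] "+Q"(*(u64 *)ptr)
			: [val] "r"((u64)(val))
			: "memory");
	}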

Thanks.

-- 
Catalin


