Overhead of arm64 LSE per-CPU atomics?

Breno Leitao leitao at debian.org
Tue Nov 4 07:59:38 PST 2025


Hello Catalin,

On Fri, Oct 31, 2025 at 06:30:31PM +0000, Catalin Marinas wrote:
> On Thu, Oct 30, 2025 at 03:37:00PM -0700, Paul E. McKenney wrote:
> > To make event tracing safe for PREEMPT_RT kernels, I have been creating
> > optimized variants of SRCU readers that use per-CPU atomics.  This works
> > quite well, but on ARM Neoverse V2, I am seeing about 100ns for a
> > srcu_read_lock()/srcu_read_unlock() pair, or about 50ns for a single
> > per-CPU atomic operation.  This contrasts with a handful of nanoseconds
> on x86 and similar on ARM for an atomic_set(&foo, atomic_read(&foo) + 1).
> 
> That's quite a difference. Does it get any better if
> CONFIG_ARM64_LSE_ATOMICS is disabled? We don't have a way to disable it
> on the kernel command line.
> 
> Depending on the implementation and configuration, the LSE atomics may
> skip the L1 cache and be executed closer to the memory (they used to be
> called far atomics). The CPUs try to be smarter like doing the operation
> "near" if it's in the cache but the heuristics may not always work.

I am trying to play with LSE latency and compare it against the LL/SC case.
I _think_ I have a reproducer in userspace.

I've created a simple userspace program that compares the latency of an
atomic add implemented with LL/SC versus LSE, basically benchmarking the
following two functions with no contention at all (a single thread doing
the atomic operation, nothing else touching the line):

	static inline void __percpu_add_case_64_llsc(void *ptr, unsigned long val)
	{
		/* Temporaries for the exclusive load/store loop. */
		unsigned long tmp;
		unsigned int loop;

		asm volatile(
			/* LL/SC */
			"1:  ldxr    %[tmp], %[ptr]\n"
			"    add     %[tmp], %[tmp], %[val]\n"
			"    stxr    %w[loop], %[tmp], %[ptr]\n"
			"    cbnz    %w[loop], 1b"
			: [loop] "=&r"(loop), [tmp] "=&r"(tmp), [ptr] "+Q"(*(u64 *)ptr)
			: [val] "r"((u64)(val))
			: "memory");
	}

and

	/* LSE implementation */
	static inline void __percpu_add_case_64_lse(void *ptr, unsigned long val)
	{
		asm volatile(
			/* LSE atomics */
			"    stadd    %[val], %[ptr]\n"
			: [ptr] "+Q"(*(u64 *)ptr)
			: [val] "r"((u64)(val))
			: "memory");
	}
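
For reference, the measurement loop is roughly of this shape. This is a
minimal sketch, not the exact code in the repository: it assumes the two
functions above sit in the same file, uses clock_gettime(CLOCK_MONOTONIC),
and times a batch of 1000 adds per sample to amortise the timer overhead;
all of those are my own choices here, not necessarily what percpu_bench does.

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>

	#define SAMPLES	10000	/* percentile samples */
	#define BATCH	1000	/* adds timed per sample, amortises clock overhead */

	static inline uint64_t now_ns(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
	}

	static int cmp_double(const void *a, const void *b)
	{
		double x = *(const double *)a, y = *(const double *)b;

		return (x > y) - (x < y);
	}

	int main(void)
	{
		static uint64_t counter;
		static double ns_per_op[SAMPLES];

		/* Single thread, private counter: no contention on the line. */
		for (int i = 0; i < SAMPLES; i++) {
			uint64_t t0 = now_ns();

			for (int j = 0; j < BATCH; j++)
				__percpu_add_case_64_lse(&counter, 1);
			ns_per_op[i] = (double)(now_ns() - t0) / BATCH;
		}

		qsort(ns_per_op, SAMPLES, sizeof(ns_per_op[0]), cmp_double);
		printf("LSE: p50: %.2f ns  p95: %.2f ns  p99: %.2f ns\n",
		       ns_per_op[SAMPLES / 2],
		       ns_per_op[SAMPLES * 95 / 100],
		       ns_per_op[SAMPLES * 99 / 100]);
		return 0;
	}

The same loop is run with __percpu_add_case_64_llsc() to get the LL/SC
numbers.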

I found that the LSE case (__percpu_add_case_64_lse) shows huge variation,
while the LL/SC case is stable. In some runs the LSE function has about the
same latency as the LL/SC one, even slightly faster at p50, but then
something happens on the system and the LSE operations start taking far
longer than LL/SC.

Here is some representative output showing the latency of the functions above:

	CPU: 47 - Latency Percentiles:
	====================
	LL/SC:   p50: 5.69 ns      p95: 5.71 ns      p99: 5.80 ns
	LSE  :   p50: 45.53 ns     p95: 54.06 ns     p99: 55.18 ns

	CPU: 48 - Latency Percentiles:
	====================
	LL/SC:   p50: 5.70 ns      p95: 5.72 ns      p99: 6.10 ns
	LSE  :   p50: 4.02 ns      p95: 45.55 ns     p99: 54.93 ns

	CPU: 49 - Latency Percentiles:
	====================
	LL/SC:   p50: 5.74 ns      p95: 5.75 ns      p99: 5.78 ns
	LSE  :   p50: 4.04 ns      p95: 50.32 ns     p99: 53.04 ns


At this stage, it is unclear what is causing these variations.

The code above can be run with:

 # git clone https://github.com/leitao/debug.git
 # cd debug/LSE
 # make && ./percpu_bench
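
One caveat: the STADD path requires FEAT_LSE, so on a core without it the
benchmark would die with SIGILL. A quick runtime check looks roughly like
this (a sketch; getauxval() comes from <sys/auxv.h> and HWCAP_ATOMICS from
the arm64 <asm/hwcap.h>):

	#include <stdio.h>
	#include <sys/auxv.h>
	#include <asm/hwcap.h>	/* HWCAP_ATOMICS on arm64 */

	int main(void)
	{
		/* The kernel sets this HWCAP bit when FEAT_LSE is usable. */
		if (getauxval(AT_HWCAP) & HWCAP_ATOMICS)
			puts("LSE atomics available");
		else
			puts("no LSE atomics - the STADD path would SIGILL");
		return 0;
	}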
