[PATCH 17/19] srcu: Optimize SRCU-fast-updown for arm64

Will Deacon will at kernel.org
Mon Nov 3 04:51:48 PST 2025


Hi Paul,

On Sun, Nov 02, 2025 at 01:44:34PM -0800, Paul E. McKenney wrote:
> Some arm64 platforms have slow per-CPU atomic operations, for example,
> the Neoverse V2.  This commit therefore moves SRCU-fast from per-CPU
> atomic operations to interrupt-disabled non-read-modify-write-atomic
> atomic_read()/atomic_set() operations.  This works because
> SRCU-fast-updown is not invoked from read-side primitives, which
> means that srcu_read_unlock_fast() is never invoked from NMI
> handlers.  This means that srcu_read_lock_fast_updown() and
> srcu_read_unlock_fast_updown() can exclude themselves and each
> other simply by disabling interrupts.
> 
> This reduces the overhead of calls to srcu_read_lock_fast_updown() and
> srcu_read_unlock_fast_updown() from about 100ns to about 12ns on an ARM
> Neoverse V2.  Although this is not excellent compared to about 2ns on x86,
> it sure beats 100ns.
> 
> This command was used to measure the overhead:
> 
> tools/testing/selftests/rcutorture/bin/kvm.sh --torture refscale --allcpus --duration 5 --configs NOPREEMPT --kconfig "CONFIG_NR_CPUS=64 CONFIG_TASKS_TRACE_RCU=y" --bootargs "refscale.loops=100000 refscale.guest_os_delay=5 refscale.nreaders=64 refscale.holdoff=30 torture.disable_onoff_at_boot refscale.scale_type=srcu-fast-updown refscale.verbose_batched=8 torture.verbose_sleep_frequency=8 torture.verbose_sleep_duration=8 refscale.nruns=100" --trust-make
> 
> Signed-off-by: Paul E. McKenney <paulmck at kernel.org>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: Will Deacon <will at kernel.org>
> Cc: Mark Rutland <mark.rutland at arm.com>
> Cc: Mathieu Desnoyers <mathieu.desnoyers at efficios.com>
> Cc: Steven Rostedt <rostedt at goodmis.org>
> Cc: Sebastian Andrzej Siewior <bigeasy at linutronix.de>
> Cc: <linux-arm-kernel at lists.infradead.org>
> Cc: <bpf at vger.kernel.org>
> ---
>  include/linux/srcutree.h | 56 ++++++++++++++++++++++++++++++++++++----
>  1 file changed, 51 insertions(+), 5 deletions(-)

[...]

> @@ -327,12 +355,23 @@ __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
>  static inline
>  struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struct *ssp)
>  {
> -	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
> +	struct srcu_ctr __percpu *scp;
>  
> -	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
> +	if (IS_ENABLED(CONFIG_ARM64) && IS_ENABLED(CONFIG_ARM64_USE_LSE_PERCPU_ATOMICS)) {
> +		unsigned long flags;
> +
> +		local_irq_save(flags);
> +		scp = __srcu_read_lock_fast_na(ssp);
> +		local_irq_restore(flags); /* Avoids leaking the critical section. */
> +		return scp;
> +	}
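
If I'm reading the series right, the _na ("non-atomic") path selected
here boils down to an interrupts-off, non-RMW update of the per-CPU
counter, i.e. something like the following (my paraphrase, not the
actual helper, and the counter field name is approximate):

	unsigned long flags;
	struct srcu_ctr __percpu *scp;

	local_irq_save(flags);
	scp = READ_ONCE(ssp->srcu_ctrp);
	/* Plain read + write instead of an RMW atomic. */
	atomic_long_set(this_cpu_ptr(&scp->srcu_locks),
			atomic_long_read(this_cpu_ptr(&scp->srcu_locks)) + 1);
	local_irq_restore(flags);

That is only safe because nothing that can interrupt the update (IRQs
are off, and the _updown primitives are never called from NMI) also
touches the counter, which is exactly the constraint the commit message
spells out.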

Do we still need to pursue this after Catalin's prefetch suggestion for the
per-cpu atomics?

https://lore.kernel.org/r/aQU7l-qMKJTx4znJ@arm.com
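
For anybody who didn't follow that thread: as I understand it, the idea
is to issue a store prefetch for the per-CPU counter's cache line before
the LSE atomic, so the atomic operates on a line the CPU already owns.
Roughly (illustrative sketch only, not Catalin's actual change):

	static inline void percpu_ctr_inc_prefetched(atomic_long_t __percpu *ctr)
	{
		atomic_long_t *p = this_cpu_ptr(ctr);

		/* Write prefetch; should compile to PRFM PSTL1KEEP on arm64. */
		__builtin_prefetch(p, 1, 3);
		atomic_long_inc(p);	/* LSE STADD when LSE atomics are enabled. */
	}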

Although disabling/enabling interrupts on your system seems to be
significantly faster than an atomic instruction, I'm worried that this
is all very SoC-specific and that, on a mobile part (especially with
pseudo-NMI), the relative costs could easily be the other way around.

Will
