[PATCH] arm64: remove HAVE_CMPXCHG_LOCAL
Catalin Marinas
catalin.marinas at arm.com
Tue Feb 17 08:48:49 PST 2026
On Tue, Feb 17, 2026 at 03:00:22PM +0000, Will Deacon wrote:
> On Tue, Feb 17, 2026 at 01:53:19PM +0000, Catalin Marinas wrote:
> > On Mon, Feb 16, 2026 at 08:59:17PM +0530, Dev Jain wrote:
> > > On 16/02/26 4:30 pm, Will Deacon wrote:
> > > > On Sun, Feb 15, 2026 at 11:39:44AM +0800, Jisheng Zhang wrote:
> > > >> It turns out that the generic IRQ disable/enable this_cpu_cmpxchg
> > > >> implementation is faster than the LL/SC or LSE implementations. Remove
> > > >> HAVE_CMPXCHG_LOCAL for better performance on arm64.
> > > >>
> > > >> Tested on a quad-core 1.9GHz Cortex-A55 platform:
> > > >> the average mod_node_page_state() cost decreases from 167ns to 103ns;
> > > >> the spawn benchmark (duration 30) in unixbench improves
> > > >> from 147494 lps to 150561 lps, a 2.1% improvement.
> > > >>
> > > >> Tested on a quad-core 2.1GHz Cortex-A73 platform:
> > > >> the average mod_node_page_state() cost decreases from 113ns to 85ns;
> > > >> the spawn benchmark (duration 30) in unixbench improves
> > > >> from 209844 lps to 212581 lps, a 1.3% improvement.
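
(For reference, the generic fallback referred to above is this_cpu_generic_cmpxchg()
in include/asm-generic/percpu.h; from memory it is roughly the below, i.e. a plain
load/compare/store done with IRQs masked rather than an atomic instruction:

#define raw_cpu_generic_cmpxchg(pcp, oval, nval)			\
({									\
	typeof(pcp) *__p = raw_cpu_ptr(&(pcp));				\
	typeof(pcp) __ret = *__p;					\
	if (__ret == (oval))						\
		*__p = nval;						\
	__ret;								\
})

#define this_cpu_generic_cmpxchg(pcp, oval, nval)			\
({									\
	typeof(pcp) __ret;						\
	unsigned long __flags;						\
	raw_local_irq_save(__flags);					\
	__ret = raw_cpu_generic_cmpxchg(pcp, oval, nval);		\
	raw_local_irq_restore(__flags);					\
	__ret;								\
})

so the comparison here is really IRQ masking vs preempt_disable() + an atomic op.)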
[...]
> > > > That is _entirely_ dependent on the system, so this isn't the right
> > > > approach. I also don't think it's something we particularly want to
> > > > micro-optimise to accommodate systems that suck at atomics.
> > >
> > > As I mentioned in the other email, the suspect is not the atomics but
> > > preempt_disable(). On Apple M3, the regression reported in [1] is
> > > resolved by removing preempt_disable/enable in _pcp_protect_return. To
> > > prove this another way, I disabled CONFIG_ARM64_HAS_LSE_ATOMICS and the
> > > regression worsened, indicating that, at least on Apple M3, the
> > > atomics are faster.
> >
> > Then why don't we replace the preempt disabling with local_irq_save()
> > in the arm64 code and still use the LSE atomics?
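
To make that concrete, the idea would be a change along these lines (a completely
untested sketch) to the _pcp_protect_return() wrapper in
arch/arm64/include/asm/percpu.h, keeping the existing LSE/LL-SC op:

#define _pcp_protect_return(op, pcp, args...)				\
({									\
	typeof(pcp) __retval;						\
	unsigned long __flags;						\
	/* mask IRQs instead of disabling preemption */			\
	local_irq_save(__flags);					\
	__retval = (typeof(pcp))op(raw_cpu_ptr(&(pcp)), ##args);	\
	local_irq_restore(__flags);					\
	__retval;							\
})

(and similarly for the non-returning _pcp_protect() variant).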
>
> Even better, work on making preempt_disable() faster as it's used in many
> other places.
Yes, that would be good. It's the preempt_enable_notrace() path that
ends up calling preempt_schedule_notrace() -> __schedule() pretty much
unconditionally. I'm not sure what would go wrong with a simple change
like the one below (it could also be done at a higher level, in the
preempt macros, to avoid even getting here):
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 854984967fe2..d9a5d6438303 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7119,7 +7119,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
 	if (likely(!preemptible()))
 		return;
 
-	do {
+	while (need_resched()) {
 		/*
 		 * Because the function tracer can trace preempt_count_sub()
 		 * and it also uses preempt_enable/disable_notrace(), if
@@ -7146,7 +7146,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
 
 		preempt_latency_stop(1);
 		preempt_enable_no_resched_notrace();
-	} while (need_resched());
+	}
 }
 EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
Of course, changing the preemption model solves this by making the
macros no-ops but I assume people want to keep preemption on.
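
For reference (from memory, so only roughly), the macros involved look like
this in include/linux/preempt.h, depending on the preemption model:

/* CONFIG_PREEMPTION: enabling may end up in preempt_schedule_notrace() */
#define preempt_enable_notrace() \
do { \
	barrier(); \
	if (unlikely(__preempt_count_dec_and_test())) \
		__preempt_schedule_notrace(); \
} while (0)

/* CONFIG_PREEMPT_COUNT but !CONFIG_PREEMPTION: just drop the count */
#define preempt_enable_notrace() \
do { \
	barrier(); \
	__preempt_count_dec(); \
} while (0)

/* !CONFIG_PREEMPT_COUNT: the whole thing compiles away */
#define preempt_enable_notrace()	barrier()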
--
Catalin