[PATCH v3 2/5] asm-generic: barrier: Handle spin-wait in smp_cond_load_relaxed_timewait()

Ankur Arora ankur.a.arora at oracle.com
Thu Jun 26 21:48:02 PDT 2025


smp_cond_load_relaxed_timewait() waits for a condition on a variable
while also watching the clock.

The generic code handles the simple case where the waiting is done
via a cpu_relax() spin-wait loop. To keep the pipeline as idle as
possible, we want to do the relatively expensive time check only
intermittently.

Extend ___smp_cond_spinwait() to adjust the spin-count based on how
far we are from the deadline, as sketched below.
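
For reference, the spin-wait path then has roughly the following shape.
This is only an illustrative sketch, not the literal macro:
example_spinwait(), the int pointer/value condition and the use of
local_clock() are placeholders for exposition.

/*
 * Illustrative only: a simplified spin-wait built around the policy
 * helper; the real smp_cond_load_relaxed_timewait() is a macro.
 */
static inline u64 example_spinwait(int *ptr, int val, u64 end, u64 slack)
{
	u32 spin = SMP_TIMEWAIT_SPIN_BASE;	/* iterations between time checks */
	bool wait = false;
	u64 prev, now;

	prev = now = local_clock();

	while (READ_ONCE(*ptr) != val) {
		u32 i;

		for (i = 0; i < spin; i++)
			cpu_relax();

		/* Pay for reading the clock only once per batch of spins. */
		now = local_clock();
		if (!___smp_cond_spinwait(now, prev, end, &spin, &wait, slack))
			break;			/* deadline has passed */
		prev = now;
	}

	return now;
}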

Cc: Arnd Bergmann <arnd at arndb.de>
Cc: Will Deacon <will at kernel.org>
Cc: Catalin Marinas <catalin.marinas at arm.com>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: linux-arch at vger.kernel.org
Signed-off-by: Ankur Arora <ankur.a.arora at oracle.com>
---
 include/asm-generic/barrier.h | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)
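
Note for reviewers: the adjustment in ___smp_cond_spinwait() below aims
the time-check interval at a quarter of the remaining time: the
spin-count doubles while checks arrive faster than that target and
decays to 3/4 once they overshoot it. A small userspace sketch of the
same arithmetic follows; SPIN_BASE, cost_ns and the 1ms deadline are
made-up values for illustration only.

#include <stdio.h>
#include <stdint.h>

#define SPIN_BASE	16		/* stand-in for SMP_TIMEWAIT_SPIN_BASE */

int main(void)
{
	uint64_t now = 0, prev = 0, end = 1000000;	/* 1ms deadline, in ns */
	uint64_t cost_ns = 10;		/* assumed cost of one spin iteration */
	uint32_t spin = SPIN_BASE;

	while (now < end) {
		uint64_t remaining, time_check;

		/* "spin" iterations of the busy-wait loop */
		now += (uint64_t)spin * cost_ns;

		remaining = end > now ? end - now : 0;
		time_check = remaining / 4 > 1 ? remaining / 4 : 1;

		/* floor, as in ___smp_cond_spinwait() */
		if (spin < SPIN_BASE)
			spin = SPIN_BASE;

		if (now - prev < time_check)
			spin <<= 1;				/* checking too often */
		else
			spin = (spin >> 1) + (spin >> 2);	/* overshot: back off to 3/4 */

		printf("t=%7llu ns spin=%u\n", (unsigned long long)now, spin);
		prev = now;
	}
	return 0;
}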

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index d33c2701c9ee..8299c57d1110 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -15,6 +15,7 @@
 
 #include <linux/compiler.h>
 #include <linux/kcsan-checks.h>
+#include <linux/minmax.h>
 #include <asm/rwonce.h>
 
 #ifndef nop
@@ -286,11 +287,30 @@ do {									\
 static inline u64 ___smp_cond_spinwait(u64 now, u64 prev, u64 end,
 				       u32 *spin, bool *wait, u64 slack)
 {
+	u64 time_check;
+	u64 remaining = end - now;
+
 	if (now >= end)
 		return 0;
-
-	*spin = SMP_TIMEWAIT_SPIN_BASE;
+	/*
+	 * Use a floor spin-count as it might be artificially low if we are
+	 * transitioning from wait to spin, or because we got interrupted.
+	 */
+	*spin = max(*spin, SMP_TIMEWAIT_SPIN_BASE);
 	*wait = false;
+
+	/*
+	 * We will map the time_check interval to the spin-count by scaling
+	 * based on the previous time-check interval. This is imprecise, so
+	 * use a safety margin.
+	 */
+	time_check = max(remaining / 4, 1UL);
+
+	if ((now - prev) < time_check)
+		*spin <<= 1;
+	else
+		*spin = ((*spin >> 1) + (*spin >> 2));
+
 	return now;
 }
 
-- 
2.43.5
