[PATCH v5 1/5] asm-generic: barrier: Add smp_cond_load_relaxed_timeout()

Will Deacon <will at kernel.org>
Mon Sep 22 03:47:06 PDT 2025


On Fri, Sep 19, 2025 at 04:41:56PM -0700, Ankur Arora wrote:
> Will Deacon <will at kernel.org> writes:
> > On Wed, Sep 10, 2025 at 08:46:51PM -0700, Ankur Arora wrote:
> >> +	for (;;) {							\
> >> +		VAL = READ_ONCE(*__PTR);				\
> >> +		if (cond_expr)						\
> >> +			break;						\
> >> +		cpu_relax();						\
> >> +		if (++__n < __spin)					\
> >> +			continue;					\
> >> +		if (time_check_expr)					\
> >> +			break;						\
> >
> > There's a funny discrepancy here when compared to the arm64 version in
> > the next patch. Here, if we time out, then the value returned is
> > potentially quite stale because it was read before the last cpu_relax().
> > In the arm64 patch, the timeout check is before the cmpwait/cpu_relax(),
> > which I think is better.
> 
> So, that's a good point. And the return value being stale also seems
> incorrect.
> 
> > Regardless, I think having the same behaviour for the two implementations
> > would be a good idea.
> 
> Yeah, agreed.
> 
> As you outlined in the other mail, how about something like this:
> 
> #ifndef smp_cond_load_relaxed_timeout
> #define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_check_expr)	\
> ({									\
> 	typeof(ptr) __PTR = (ptr);					\
> 	__unqual_scalar_typeof(*ptr) VAL;				\
> 	u32 __n = 0, __poll = SMP_TIMEOUT_POLL_COUNT;			\
> 									\
> 	for (;;) {							\
> 		VAL = READ_ONCE(*__PTR);				\
> 		if (cond_expr)						\
> 			break;						\
> 		cpu_poll_relax();					\
> 		if (++__n < __poll)					\
> 			continue;					\
> 		if (time_check_expr) {					\
> 			VAL = READ_ONCE(*__PTR);			\
> 			break;						\
> 		}							\
> 		__n = 0;						\
> 	}								\
> 	(typeof(*ptr))VAL;						\
> })
> #endif

That looks better to me, thanks.
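As a sanity check on those semantics, here's a rough userspace mock-up
(builds with gcc/clang; READ_ONCE(), cpu_poll_relax() and
SMP_TIMEOUT_POLL_COUNT below are stand-ins rather than the kernel
definitions, and the "timeout" is just a counter). It shows that a
timed-out waiter now hands back a freshly re-read value instead of one
from before the last relax:

#include <stdio.h>

#define READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
#define cpu_poll_relax()	do { } while (0)
#define SMP_TIMEOUT_POLL_COUNT	16

#define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_check_expr)	\
({									\
	typeof(ptr) __PTR = (ptr);					\
	typeof(*(ptr)) VAL;						\
	unsigned int __n = 0, __poll = SMP_TIMEOUT_POLL_COUNT;		\
									\
	for (;;) {							\
		VAL = READ_ONCE(*__PTR);				\
		if (cond_expr)						\
			break;						\
		cpu_poll_relax();					\
		if (++__n < __poll)					\
			continue;					\
		/* the (possibly expensive) timeout condition is	\
		 * only evaluated once every __poll iterations */	\
		if (time_check_expr) {					\
			/* re-read so a timed-out caller doesn't see	\
			 * a value from before the last relax */	\
			VAL = READ_ONCE(*__PTR);			\
			break;						\
		}							\
		__n = 0;						\
	}								\
	(typeof(*(ptr)))VAL;						\
})

int main(void)
{
	int flag = 0;			/* condition never comes true */
	int deadline = 3, laps = 0;

	/* time_check_expr fires after a few trips around the poll loop */
	int v = smp_cond_load_relaxed_timeout(&flag, VAL != 0,
					      ++laps >= deadline);
	printf("timed out, freshly re-read value: %d\n", v);
	return 0;
}

Obviously just a toy, but it has the same shape as the macro above.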

Will