[PATCH v8 10/12] bpf/rqspinlock: Use smp_cond_load_acquire_timeout()
Alexei Starovoitov
alexei.starovoitov at gmail.com
Mon Dec 15 13:40:06 PST 2025
On Sun, Dec 14, 2025 at 8:51 PM Ankur Arora <ankur.a.arora at oracle.com> wrote:
>
> /**
> * resilient_queued_spin_lock_slowpath - acquire the queued spinlock
> * @lock: Pointer to queued spinlock structure
> @@ -415,7 +415,9 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
> */
> if (val & _Q_LOCKED_MASK) {
> RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT);
> - res_smp_cond_load_acquire(&lock->locked, !VAL || RES_CHECK_TIMEOUT(ts, timeout_err, _Q_LOCKED_MASK) < 0);
> + smp_cond_load_acquire_timeout(&lock->locked, !VAL,
> + (timeout_err = clock_deadlock(lock, _Q_LOCKED_MASK, &ts)),
> + ts.duration);
I'm pretty sure we already discussed this and pointed out that
this approach is not acceptable.
We cannot call ktime_get_mono_fast_ns() right away.
That's why RES_CHECK_TIMEOUT() exists: it gates the check with
if (!(ts).spin++)
before ever calling check_timeout(), which is what actually does the
ktime_get_mono_fast_ns() read.
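For reference, the amortization looks roughly like this (a simplified
sketch of the RES_CHECK_TIMEOUT() pattern; the struct layout and the
check_timeout() signature are from memory and may not match the current
rqspinlock.c exactly):

struct rqspinlock_timeout {
	u64 timeout_end;
	u64 duration;
	u16 spin;	/* u16, so it wraps every 64K increments */
};

int check_timeout(rqspinlock_t *lock, u32 mask,
		  struct rqspinlock_timeout *ts);

/* 'lock' is picked up from the caller's scope, as in rqspinlock.c. */
#define RES_CHECK_TIMEOUT(ts, ret, mask)                              \
	({                                                            \
		/* The expensive clock read happens only when the     \
		 * spin counter wraps to zero, i.e. once per 64K      \
		 * loop iterations, not on every spin.                \
		 */                                                   \
		if (!(ts).spin++)                                     \
			(ret) = check_timeout((lock), (mask), &(ts)); \
		(ret);                                                \
	})

So the ktime_get_mono_fast_ns() cost is paid once per 64K spins instead
of on every iteration of the wait loop.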
The above is a performance-critical lock acquisition path: the pending
waiter is spinning on the lock word, waiting for the owner to release
the lock. Adding an unconditional ktime_get_mono_fast_ns() call there
will destroy performance for short critical sections.
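If the conversion to smp_cond_load_acquire_timeout() stays, one possible
direction (an untested sketch, reusing clock_deadlock(), timeout_err and
ts from the quoted hunk, and assuming ts still carries the u16 spin
counter) would be to keep the spin-counter gate inside the
time_check_expr, so the common iterations never reach the clock read:

	if (val & _Q_LOCKED_MASK) {
		RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT);
		/*
		 * Hypothetical: evaluate clock_deadlock() -- and hence
		 * ktime_get_mono_fast_ns() -- only when the spin
		 * counter wraps, preserving the old amortization.
		 */
		smp_cond_load_acquire_timeout(&lock->locked, !VAL,
			(!ts.spin++ ?
			 (timeout_err = clock_deadlock(lock, _Q_LOCKED_MASK, &ts)) :
			 timeout_err),
			ts.duration);
	}

Whether that composes cleanly with the generic macro's own deadline
handling is a separate question, but it keeps ktime_get_mono_fast_ns()
off the common path.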