[PATCH v13 04/17] preempt: Introduce __preempt_count_{sub, add}_return()

Andreas Hindborg a.hindborg at kernel.org
Tue Nov 4 04:30:00 PST 2025


Lyude Paul <lyude at redhat.com> writes:

> From: Boqun Feng <boqun.feng at gmail.com>
>
> In order to use preempt_count() to track the interrupt disable
> nesting level, __preempt_count_{add,sub}_return() are introduced. As
> their names suggest, these primitives return the new value of
> preempt_count() after changing it. The following example shows their
> usage in local_interrupt_disable():
>
> 	// increase the HARDIRQ_DISABLE bit
> 	new_count = __preempt_count_add_return(HARDIRQ_DISABLE_OFFSET);
>
> 	// if it's the first-time increment, then disable the interrupt
> 	// at hardware level.
> 	if ((new_count & HARDIRQ_DISABLE_MASK) == HARDIRQ_DISABLE_OFFSET) {
> 		local_irq_save(flags);
> 		raw_cpu_write(local_interrupt_disable_state.flags, flags);
> 	}
>
> Having these primitives will avoid a read of preempt_count() after
> changing preempt_count() on certain architectures.
>
> Signed-off-by: Boqun Feng <boqun.feng at gmail.com>
>
> ---
> V10:
> * Add commit message I forgot
> * Rebase against latest pcpu_hot changes
> V11:
> * Remove CONFIG_PROFILE_ALL_BRANCHES workaround from
>   __preempt_count_add_return()
>
>  arch/arm64/include/asm/preempt.h | 18 ++++++++++++++++++
>  arch/s390/include/asm/preempt.h  | 10 ++++++++++
>  arch/x86/include/asm/preempt.h   | 10 ++++++++++
>  include/asm-generic/preempt.h    | 14 ++++++++++++++
>  4 files changed, 52 insertions(+)
>
> diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
> index 932ea4b620428..0dd8221d1bef7 100644
> --- a/arch/arm64/include/asm/preempt.h
> +++ b/arch/arm64/include/asm/preempt.h
> @@ -55,6 +55,24 @@ static inline void __preempt_count_sub(int val)
>  	WRITE_ONCE(current_thread_info()->preempt.count, pc);
>  }
>  
> +static inline int __preempt_count_add_return(int val)
> +{
> +	u32 pc = READ_ONCE(current_thread_info()->preempt.count);
> +	pc += val;
> +	WRITE_ONCE(current_thread_info()->preempt.count, pc);
> +
> +	return pc;
> +}
> +
> +static inline int __preempt_count_sub_return(int val)
> +{
> +	u32 pc = READ_ONCE(current_thread_info()->preempt.count);
> +	pc -= val;
> +	WRITE_ONCE(current_thread_info()->preempt.count, pc);
> +
> +	return pc;
> +}
> +

I am wondering how this works when preemption is enabled. Will the
kernel never preempt itself here? I would have thought this would need
to be atomic. I can see the surrounding code uses the same pattern, so
it is probably fine, but I am curious as to why that is.


Best regards,
Andreas Hindborg

More information about the linux-arm-kernel mailing list