[patch 06/13] locking/bitspinlock: Cleanup PREEMPT_COUNT leftovers
Will Deacon
will at kernel.org
Tue Sep 15 12:10:48 EDT 2020
On Mon, Sep 14, 2020 at 10:42:15PM +0200, Thomas Gleixner wrote:
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Clean up the leftovers before doing so.
>
> Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
> ---
> include/linux/bit_spinlock.h | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> --- a/include/linux/bit_spinlock.h
> +++ b/include/linux/bit_spinlock.h
> @@ -90,10 +90,8 @@ static inline int bit_spin_is_locked(int
> {
> #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
> return test_bit(bitnum, addr);
> -#elif defined CONFIG_PREEMPT_COUNT
> - return preempt_count();
> #else
> - return 1;
> + return preempt_count();
> #endif
Acked-by: Will Deacon <will at kernel.org>
Will