[patch 05/13] mm/pagemap: Cleanup PREEMPT_COUNT leftovers
Thomas Gleixner
tglx at linutronix.de
Mon Sep 14 16:42:14 EDT 2020
CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.
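
For reference, a short sketch of the relevant definitions (paraphrased from
include/linux/preempt.h; the exact form in the tree may differ) shows why the
open-coded check below collapses to preemptible() once PREEMPT_COUNT is
always available:

  /* Paraphrased sketch, not part of this patch: */
  #define in_atomic()     (preempt_count() != 0)
  #define preemptible()   (preempt_count() == 0 && !irqs_disabled())

  /*
   * The old check !in_atomic() && !irqs_disabled() is therefore exactly
   * preemptible(), which now always reflects the real preemption state
   * because preempt_count() is no longer compiled out.
   */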
Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: linux-mm at kvack.org
---
include/linux/pagemap.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -168,9 +168,7 @@ void release_pages(struct page **pages,
static inline int __page_cache_add_speculative(struct page *page, int count)
{
#ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
- VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+ VM_BUG_ON(preemptible());
/*
* Preempt must be disabled here - we rely on rcu_read_lock doing
* this for us.