[PATCH v3 14/51] cpuidle,cpu_pm: Remove RCU fiddling from cpu_pm_{enter,exit}()
Peter Zijlstra
peterz at infradead.org
Thu Jan 12 11:43:28 PST 2023
All callers should still have RCU enabled.
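The context-tracking dance being removed was only needed because cpu_pm_notify*() used to be reachable from a point in the idle path where RCU had already stopped watching the CPU; after the earlier patches in this series the notifiers always run with RCU watching, so a plain rcu_read_lock() suffices. A minimal sketch of the caller-side ordering this assumes (the idle-path body below is purely illustrative; only cpu_pm_enter()/cpu_pm_exit() are the real <linux/cpu_pm.h> API):

#include <linux/cpu_pm.h>

/*
 * Illustrative sketch (not actual kernel code) of the ordering relied
 * on here: cpu_pm_enter()/cpu_pm_exit() are invoked while RCU is still
 * watching the CPU, so the notifier chain can use rcu_read_lock()
 * without any extra context-tracking transitions.
 */
static int example_idle_enter(void)
{
	int ret;

	ret = cpu_pm_enter();		/* notifiers run with RCU watching */
	if (ret)
		return ret;

	/*
	 * Only here would the real idle path tell RCU / context tracking
	 * that the CPU is going idle and enter the low-power state.
	 */

	return cpu_pm_exit();		/* exit notifiers, RCU watching again */
}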
Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
Reviewed-by: Ulf Hansson <ulf.hansson at linaro.org>
Acked-by: Mark Rutland <mark.rutland at arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki at intel.com>
Acked-by: Frederic Weisbecker <frederic at kernel.org>
Tested-by: Tony Lindgren <tony at atomide.com>
Tested-by: Ulf Hansson <ulf.hansson at linaro.org>
---
kernel/cpu_pm.c | 9 ---------
1 file changed, 9 deletions(-)
--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -30,16 +30,9 @@ static int cpu_pm_notify(enum cpu_pm_eve
 {
	int ret;

-	/*
-	 * This introduces a RCU read critical section, which could be
-	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
-	 * this.
-	 */
-	ct_irq_enter_irqson();
	rcu_read_lock();
	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
	rcu_read_unlock();
-	ct_irq_exit_irqson();

	return notifier_to_errno(ret);
 }
@@ -49,11 +42,9 @@ static int cpu_pm_notify_robust(enum cpu
	unsigned long flags;
	int ret;

-	ct_irq_enter_irqson();
	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
	ret = raw_notifier_call_chain_robust(&cpu_pm_notifier.chain, event_up, event_down, NULL);
	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
-	ct_irq_exit_irqson();

	return notifier_to_errno(ret);
 }
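For reference, the ct_irq_enter_irqson()/ct_irq_exit_irqson() pair deleted above open-coded what the RCU_NONIDLE() helper did at the time: force RCU to watch the wrapped region even when called from a context it had already stopped watching. Roughly (an approximation of the then-current rcupdate.h definition, not a verbatim copy):

/* Approximation of RCU_NONIDLE() circa v6.1; shown for context only. */
#define RCU_NONIDLE(a) \
	do { \
		ct_irq_enter_irqson();	/* make RCU watch this section */ \
		do { a; } while (0);	/* run the wrapped code */ \
		ct_irq_exit_irqson();	/* drop back to the previous state */ \
	} while (0)

With every caller now guaranteed to have RCU watching, the extra context-tracking transitions in cpu_pm_notify*() are unnecessary.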