[PATCH 1/2] cpuidle: Avoid calls to cpuidle_resume|pause() for s2idle

Rafael J. Wysocki rjw at rjwysocki.net
Sat Oct 9 08:42:53 PDT 2021


On Wednesday, October 6, 2021 3:10:55 PM CEST Ulf Hansson wrote:
> On Wed, 6 Oct 2021 at 12:22, Maulik Shah <mkshah at codeaurora.org> wrote:
> >
> > Hi,
> >
> > On 9/29/2021 8:14 PM, Ulf Hansson wrote:
> > > In s2idle_enter(), cpuidle_resume|pause() are invoked to re-allow calls to
> > > the cpuidle callbacks during s2idle operations. This is needed because
> > > cpuidle is paused in-between in dpm_suspend_noirq() and dpm_resume_noirq().
> > >
> > > However, calling cpuidle_resume|pause() from s2idle_enter() looks a bit
> > > superfluous, as it also causes all CPUs to be woken up when the first CPU
> > > wakes up from s2idle.
> >
> > Thanks for the patch. This can be a good optimization to avoid always
> > waking up all CPUs.
> >
> > >
> > > Therefore, let's drop the calls to cpuidle_resume|pause() from
> > > s2idle_enter(). To make this work, let's also adapt the path in
> > > cpuidle_idle_call() to allow cpuidle callbacks to be invoked for s2idle,
> > > even if cpuidle has been paused.
> > >
> > > Signed-off-by: Ulf Hansson <ulf.hansson at linaro.org>
> > > ---
> > >   drivers/cpuidle/cpuidle.c |  7 ++++++-
> > >   include/linux/cpuidle.h   |  2 ++
> > >   kernel/power/suspend.c    |  2 --
> > >   kernel/sched/idle.c       | 40 ++++++++++++++++++++++-----------------
> > >   4 files changed, 31 insertions(+), 20 deletions(-)
> > >
> > > diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> > > index ef2ea1b12cd8..c76747e497e7 100644
> > > --- a/drivers/cpuidle/cpuidle.c
> > > +++ b/drivers/cpuidle/cpuidle.c
> > > @@ -49,7 +49,12 @@ void disable_cpuidle(void)
> > >   bool cpuidle_not_available(struct cpuidle_driver *drv,
> > >                          struct cpuidle_device *dev)
> > >   {
> > > -     return off || !initialized || !drv || !dev || !dev->enabled;
> > > +     return off || !drv || !dev || !dev->enabled;
> > > +}
> > > +
> > > +bool cpuidle_paused(void)
> > > +{
> > > +     return !initialized;
> > >   }
> > >
> > >   /**
> > > diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
> > > index fce476275e16..51698b385ab5 100644
> > > --- a/include/linux/cpuidle.h
> > > +++ b/include/linux/cpuidle.h
> > > @@ -165,6 +165,7 @@ extern void cpuidle_pause_and_lock(void);
> > >   extern void cpuidle_resume_and_unlock(void);
> > >   extern void cpuidle_pause(void);
> > >   extern void cpuidle_resume(void);
> > > +extern bool cpuidle_paused(void);
> > >   extern int cpuidle_enable_device(struct cpuidle_device *dev);
> > >   extern void cpuidle_disable_device(struct cpuidle_device *dev);
> > >   extern int cpuidle_play_dead(void);
> > > @@ -204,6 +205,7 @@ static inline void cpuidle_pause_and_lock(void) { }
> > >   static inline void cpuidle_resume_and_unlock(void) { }
> > >   static inline void cpuidle_pause(void) { }
> > >   static inline void cpuidle_resume(void) { }
> > > +static inline bool cpuidle_paused(void) {return true; }
> > >   static inline int cpuidle_enable_device(struct cpuidle_device *dev)
> > >   {return -ENODEV; }
> > >   static inline void cpuidle_disable_device(struct cpuidle_device *dev) { }
> > > diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
> > > index eb75f394a059..388a5de4836e 100644
> > > --- a/kernel/power/suspend.c
> > > +++ b/kernel/power/suspend.c
> > > @@ -97,7 +97,6 @@ static void s2idle_enter(void)
> > >       raw_spin_unlock_irq(&s2idle_lock);
> > >
> > >       cpus_read_lock();
> > > -     cpuidle_resume();
> > >
> > >       /* Push all the CPUs into the idle loop. */
> > >       wake_up_all_idle_cpus();
> >
> > wake_up_all_idle_cpus() will still cause all CPUs to be woken up when
> > the first CPU wakes up.
> >
> > Say, for example:
> > 1. The device goes into s2idle suspend.
> > 2. One CPU wakes up to handle an irq (the irq is not a wake irq, but is
> > left enabled at the GIC because of the IRQF_NOSUSPEND flag), so such an
> > irq will not break suspend (see the registration sketch after this list).
> > 3. That CPU handles the irq.
> > 4. The same CPU doesn't break out of s2idle_loop() and goes back into
> > s2idle_enter(), where it wakes up all the existing idle CPUs via
> > wake_up_all_idle_cpus().
> > 5. All of the CPUs enter s2idle again.
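
[For reference, a minimal sketch of the kind of interrupt described in step 2:
a handler registered with IRQF_NOSUSPEND stays enabled at the interrupt
controller across system suspend, so it can wake a CPU during s2idle without
that wakeup counting as a reason to abort suspend. The driver name and handler
below are hypothetical.]

#include <linux/interrupt.h>

/* Hypothetical handler: it may run while the system is suspended-to-idle,
 * but it does not signal a system wakeup, so s2idle is not aborted. */
static irqreturn_t foo_nosuspend_handler(int irq, void *dev_id)
{
	/* ... service the hardware ... */
	return IRQ_HANDLED;
}

static int foo_request_nosuspend_irq(int irq, void *dev)
{
	/* IRQF_NOSUSPEND keeps the line enabled at the interrupt
	 * controller when device interrupts are suspended. */
	return request_irq(irq, foo_nosuspend_handler, IRQF_NOSUSPEND,
			   "foo-nosuspend", dev);
}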
> >
> > To avoid waking up all CPUs in the above case, something like the snippet
> > below may help (I have not tested it yet):
> >
> > When the CPUs are in s2idle_loop():
> >
> > 1. Set the s2idle state to enter.
> > 2. Wake up all CPUs from their shallow states, so that they can re-enter
> > the deepest state.
> > 3. Loop forever until a wake irq breaks the loop.
> > 4. Clear the s2idle state.
> > 5. Wake up all CPUs from the deepest state, so that they can now stay in
> > a shallow/running state.
> >
> > void s2idle_loop(void)
> > {
> >
> > +       s2idle_state = S2IDLE_STATE_ENTER;
> > +       /* Push all the CPUs to enter deepest available state */
> > +       wake_up_all_idle_cpus();
> >          for (;;) {
> >                  if (s2idle_ops && s2idle_ops->wake) {
> >                          if (s2idle_ops->wake())

So generally you don't know what will break loose in the ->wake() callback
and so you don't know what idle states the CPUs will end up in before
s2idle_enter().

You could optimize for the case when ->wake() is not present, though.

> >                                 ..
> >                  s2idle_enter();
> >          }
> > +       s2idle_state = S2IDLE_STATE_NONE;
> > +       /* Push all the CPUs to enter default_idle() from this point */
> > +       wake_up_all_idle_cpus();
> > }
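
[Along the lines of the ->wake() comment above: when a ->wake() callback is
present it may run arbitrary code on any CPU, so the idle CPUs would still
need to be pushed back into the deepest state on every iteration; only when
->wake() is absent could the wake-up be hoisted out of the loop. A rough,
untested sketch of what that distinction might look like follows; it is not
the actual patch, and only the existing kernel symbols are real.]

/* Illustrative sketch only, combining the proposal above with the
 * optimization suggested for the case when ->wake() is not present. */
void s2idle_loop(void)
{
	bool has_wake_cb = s2idle_ops && s2idle_ops->wake;

	s2idle_state = S2IDLE_STATE_ENTER;

	/* Without a ->wake() callback nothing should disturb the idle
	 * CPUs between iterations, so one wake-up before the loop is
	 * enough to push them into the deepest available state. */
	if (!has_wake_cb)
		wake_up_all_idle_cpus();

	for (;;) {
		if (has_wake_cb) {
			if (s2idle_ops->wake())
				break;
			/* ->wake() may have run code on any CPU, so the
			 * per-iteration wake-up in s2idle_enter() (or an
			 * equivalent here) is still needed in this case. */
		} else if (pm_wakeup_pending()) {
			break;
		}

		s2idle_enter();
	}

	s2idle_state = S2IDLE_STATE_NONE;
	/* Let the CPUs drop back to the normal idle path from now on. */
	wake_up_all_idle_cpus();
}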
