[PATCH] cpuidle: riscv-sbi: Add cluster_pm_enter()/exit()
Ulf Hansson
ulf.hansson at linaro.org
Tue Apr 30 01:12:43 PDT 2024
On Mon, 29 Apr 2024 at 18:26, Nick Hu <nick.hu at sifive.com> wrote:
>
> On Tue, Apr 30, 2024 at 12:22 AM Nick Hu <nick.hu at sifive.com> wrote:
> >
> > Hi Ulf
> >
> > On Mon, Apr 29, 2024 at 10:32 PM Ulf Hansson <ulf.hansson at linaro.org> wrote:
> > >
> > > On Mon, 26 Feb 2024 at 07:51, Nick Hu <nick.hu at sifive.com> wrote:
> > > >
> > > > When the CPUs in the same cluster are all in the idle state, the kernel
> > > > might put the cluster into a deeper low power state. Call
> > > > cluster_pm_enter() before entering the low power state and call
> > > > cluster_pm_exit() after the cluster has woken up.
> > > >
> > > > Signed-off-by: Nick Hu <nick.hu at sifive.com>
> > >
> > > I was not cc'ed on this patch, but I noticed that it got queued up
> > > recently. Sorry for not noticing earlier.
> > >
> > > If it's not too late, can you please drop/revert it? We should really
> > > move away from the CPU cluster notifiers. See more information below.
> > >
> > > > ---
> > > > drivers/cpuidle/cpuidle-riscv-sbi.c | 24 ++++++++++++++++++++++--
> > > > 1 file changed, 22 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
> > > > index e8094fc92491..298dc76a00cf 100644
> > > > --- a/drivers/cpuidle/cpuidle-riscv-sbi.c
> > > > +++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
> > > > @@ -394,6 +394,7 @@ static int sbi_cpuidle_pd_power_off(struct generic_pm_domain *pd)
> > > >  {
> > > >         struct genpd_power_state *state = &pd->states[pd->state_idx];
> > > >         u32 *pd_state;
> > > > +       int ret;
> > > >
> > > >         if (!state->data)
> > > >                 return 0;
> > > > @@ -401,6 +402,10 @@ static int sbi_cpuidle_pd_power_off(struct generic_pm_domain *pd)
> > > >         if (!sbi_cpuidle_pd_allow_domain_state)
> > > >                 return -EBUSY;
> > > >
> > > > +       ret = cpu_cluster_pm_enter();
> > > > +       if (ret)
> > > > +               return ret;
> > >
> > > Rather than using the CPU cluster notifiers, consumers of the genpd
> > > can register themselves to receive genpd on/off notifiers.
> > >
> > > In other words, none of this should be needed, right?
> > >
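[For reference, registering for the genpd on/off notifiers is done per
device via dev_pm_genpd_add_notifier(). A minimal, untested sketch of
what that could look like; the driver, probe function and names below
are made up for illustration, and the device is assumed to already be
attached to the cluster genpd:]

#include <linux/notifier.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>

static int l2_genpd_cb(struct notifier_block *nb, unsigned long action,
                       void *data)
{
        /* React to GENPD_NOTIFY_* events here (see the fuller sketch below). */
        return NOTIFY_OK;
}

static struct notifier_block l2_genpd_nb = { .notifier_call = l2_genpd_cb };

static int l2_cache_probe(struct platform_device *pdev)
{
        /* Receive GENPD_NOTIFY_* events for the domain this device belongs to. */
        return dev_pm_genpd_add_notifier(&pdev->dev, &l2_genpd_nb);
}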
> > Thanks for the feedback!
> > Maybe I'm missing something, so I'm wondering about a case like the
> > one below: if we have a shared L2 cache controller inside the CPU
> > cluster power domain and we add this controller as a consumer of the
> > power domain, shouldn't the genpd enter the domain idle state only
> > after the shared L2 cache controller is suspended?
> > Is there a way that we can power down the L2 cache while all the CPUs
> > in the same cluster are idle?
> > > [...]
> Sorry, I made a mistake in my second question.
> Here is the updated question:
> Is there a way that we can save the L2 cache state while all the CPUs
> in the same cluster are idle and the cluster could be powered down?
If the L2 cache is a consumer of the cluster, the consumer driver for
the L2 cache should register for genpd on/off notifiers.
The device representing the L2 cache needs to be enabled for runtime
PM so that the cluster genpd takes it into account correctly. In this
case, the device should most likely remain runtime suspended and
instead rely on the genpd on/off notifiers to know when the cache
state should be saved/restored.
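To make that concrete, something along these lines could work
(completely untested sketch; GENPD_NOTIFY_PRE_OFF is delivered just
before the domain powers off and GENPD_NOTIFY_ON just after it powers
on, while the save/restore helpers below are made-up placeholders for
hardware-specific code, not a real API):

#include <linux/notifier.h>
#include <linux/pm_domain.h>

/* Hypothetical helpers; the actual save/restore is hardware specific. */
static void l2_cache_save_state(void) { }
static void l2_cache_restore_state(void) { }

static int l2_genpd_cb(struct notifier_block *nb, unsigned long action,
                       void *data)
{
        switch (action) {
        case GENPD_NOTIFY_PRE_OFF:
                /* The cluster is about to lose power: save the cache state. */
                l2_cache_save_state();
                break;
        case GENPD_NOTIFY_ON:
                /* The cluster has regained power: restore the cache state. */
                l2_cache_restore_state();
                break;
        default:
                break;
        }
        return NOTIFY_OK;
}

The device would still be runtime PM enabled (pm_runtime_enable()) in
probe() so that the cluster genpd takes it into account, but it can
stay runtime suspended; the notifier above, rather than the runtime PM
callbacks, triggers the save/restore.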
Kind regards
Uffe