[PATCH] cpuidle: riscv-sbi: Add cluster_pm_enter()/exit()

Anup Patel anup at brainfault.org
Tue May 14 07:54:15 PDT 2024


On Tue, May 14, 2024 at 7:53 PM Anup Patel <anup at brainfault.org> wrote:
>
> Hi Nick,
>
> On Tue, May 14, 2024 at 3:20 PM Nick Hu <nick.hu at sifive.com> wrote:
> >
> > Hi Ulf,
> >
> > Thank you for your valuable suggestion.
> > I apologize for the delay in responding to your message; we have been
> > experimenting with the approach you suggested.
> >
> > As per your recommendation, we have added the "power-domains=<>"
> > property to the consumer's node, which modifies the DTS as
> > illustrated below:
> >
> > cpus {
> >     ...
> >     domain-idle-states {
> >         CLUSTER_SLEEP: cluster-sleep {
> >             compatible = "domain-idle-state";
> >             ...
> >         };
> >     };
> >     power-domains {
> >         ...
> >         CLUSTER_PD: clusterpd {
> >             domain-idle-states = <&CLUSTER_SLEEP>;
> >         };
> >     };
> > };
> >
> > soc {
> >     deviceA@xxx {
> >         ...
> >         power-domains = <&CLUSTER_PD>;
> >         ...
> >     };
> > };
> >
> > However, with this change the probe of 'deviceA' is deferred by
> > device_links_check_suppliers() within really_probe(). To mitigate
> > this, we experimented with a workaround: adding status = "disabled"
> > to the 'CLUSTER_PD' node, so that no device link is created between
> > 'deviceA' and 'CLUSTER_PD'. However, we remain uncertain whether this
> > is an appropriate solution.
> >
> > Do you have suggestions on how to effectively address this issue?
>
> I totally missed this email since I was not CC'ed, sorry about that. Please
> use scripts/get_maintainer.pl when sending patches.
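>
> For example, from the top of the kernel tree:
>
>     $ ./scripts/get_maintainer.pl drivers/cpuidle/cpuidle-riscv-sbi.c
>
> and Cc everyone (maintainers, reviewers, and lists) that it prints.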

I stand corrected. This patch had landed in my "spam" folder; I don't know why.

Regards,
Anup

>
> The genpd_add_provider() function (called by of_genpd_add_provider_simple())
> marks the power-domain DT node as initialized (see fwnode_dev_initialized()),
> so once the cpuidle-riscv-sbi driver has probed, the 'deviceA' dependency is
> resolved and 'deviceA' should be probed, unless there are other unmet
> dependencies.
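>
> For reference, the provider side boils down to roughly the following (a
> simplified sketch of the cpuidle-riscv-sbi probe path, with a made-up
> function name and error handling trimmed; sbi_cpuidle_pd_power_off() is
> the callback from this patch):
>
>     #include <linux/of.h>
>     #include <linux/pm_domain.h>
>     #include <linux/slab.h>
>
>     static int sbi_pd_setup(struct device_node *np)
>     {
>             struct generic_pm_domain *pd;
>
>             pd = kzalloc(sizeof(*pd), GFP_KERNEL);
>             if (!pd)
>                     return -ENOMEM;
>
>             pd->name = np->full_name;
>             pd->power_off = sbi_cpuidle_pd_power_off;
>             pm_genpd_init(pd, NULL, false);
>
>             /*
>              * This marks the DT node initialized, letting consumers such
>              * as 'deviceA' pass the device_links_check_suppliers() check.
>              */
>             return of_genpd_add_provider_simple(np, pd);
>     }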
>
> Try adding "#define DEBUG" before all the includes in drivers/base/core.c
> and adding "loglevel=8" to the kernel parameters; this will print the
> producer-consumer linkage of all devices.
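>
> That is, something like:
>
>     /* first line of drivers/base/core.c, before all #includes */
>     #define DEBUG
>
> together with "loglevel=8" on the kernel command line, so that the
> dev_dbg() output from the device-link code is compiled in and actually
> reaches the console.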
>
> Marking the power-domain DT node as "disabled" is certainly not the
> right way.
>
> Regards,
> Anup
>
> >
> > Regards,
> > Nick
> >
> > On Tue, Apr 30, 2024 at 4:13 PM Ulf Hansson <ulf.hansson at linaro.org> wrote:
> > >
> > > On Mon, 29 Apr 2024 at 18:26, Nick Hu <nick.hu at sifive.com> wrote:
> > > >
> > > > On Tue, Apr 30, 2024 at 12:22 AM Nick Hu <nick.hu at sifive.com> wrote:
> > > > >
> > > > > Hi Ulf
> > > > >
> > > > > On Mon, Apr 29, 2024 at 10:32 PM Ulf Hansson <ulf.hansson at linaro.org> wrote:
> > > > > >
> > > > > > On Mon, 26 Feb 2024 at 07:51, Nick Hu <nick.hu at sifive.com> wrote:
> > > > > > >
> > > > > > > When the CPUs in the same cluster are all in the idle state, the kernel
> > > > > > > might put the cluster into a deeper low power state. Call
> > > > > > > cluster_pm_enter() before entering the low power state and
> > > > > > > cluster_pm_exit() after the cluster has woken up.
> > > > > > >
> > > > > > > Signed-off-by: Nick Hu <nick.hu at sifive.com>
> > > > > >
> > > > > > I was not cc'ed on this patch, but noticed that it got queued up
> > > > > > recently. Sorry for not noticing it earlier.
> > > > > >
> > > > > > If not too late, can you please drop/revert it? We should really move
> > > > > > away from the CPU cluster notifiers. See more information below.
> > > > > >
> > > > > > > ---
> > > > > > >  drivers/cpuidle/cpuidle-riscv-sbi.c | 24 ++++++++++++++++++++++--
> > > > > > >  1 file changed, 22 insertions(+), 2 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
> > > > > > > index e8094fc92491..298dc76a00cf 100644
> > > > > > > --- a/drivers/cpuidle/cpuidle-riscv-sbi.c
> > > > > > > +++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
> > > > > > > @@ -394,6 +394,7 @@ static int sbi_cpuidle_pd_power_off(struct generic_pm_domain *pd)
> > > > > > >  {
> > > > > > >         struct genpd_power_state *state = &pd->states[pd->state_idx];
> > > > > > >         u32 *pd_state;
> > > > > > > +       int ret;
> > > > > > >
> > > > > > >         if (!state->data)
> > > > > > >                 return 0;
> > > > > > > @@ -401,6 +402,10 @@ static int sbi_cpuidle_pd_power_off(struct generic_pm_domain *pd)
> > > > > > >         if (!sbi_cpuidle_pd_allow_domain_state)
> > > > > > >                 return -EBUSY;
> > > > > > >
> > > > > > > +       ret = cpu_cluster_pm_enter();
> > > > > > > +       if (ret)
> > > > > > > +               return ret;
> > > > > >
> > > > > > Rather than using the CPU cluster notifiers, consumers of the genpd
> > > > > > can register themselves to receive genpd on/off notifiers.
> > > > > >
> > > > > > In other words, none of this should be needed, right?
> > > > > >
> > > > > Thanks for the feedback!
> > > > > Maybe I'm missing something, but I'm wondering about a case like the
> > > > > one below: if we have a shared L2 cache controller inside the CPU
> > > > > cluster power domain and we make this controller a consumer of the
> > > > > power domain, shouldn't the genpd enter the domain idle state only
> > > > > after the shared L2 cache controller is suspended?
> > > > > Is there a way to power down the L2 cache while all CPUs in the same
> > > > > cluster are idle?
> > > > > > [...]
> > > > Sorry, I made a mistake in my second question.
> > > > Updated question:
> > > > Is there a way to save the L2 cache state while all CPUs in the same
> > > > cluster are idle and the cluster can be powered down?
> > >
> > > If the L2 cache is a consumer of the cluster, the consumer driver for
> > > the L2 cache should register for genpd on/off notifiers.
> > >
> > > The device representing the L2 cache needs to be enabled for runtime
> > > PM, so that it is taken into account correctly by the cluster genpd. In
> > > this case, the device should most likely remain runtime suspended, and
> > > instead rely on the genpd on/off notifiers to know when the cache state
> > > should be saved and restored, as sketched below.
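> > >
> > > A rough sketch of what the L2 cache driver could do (illustrative
> > > only; the l2_cache_save_state()/l2_cache_restore_state() helpers are
> > > hypothetical and error handling is omitted):
> > >
> > >     #include <linux/notifier.h>
> > >     #include <linux/platform_device.h>
> > >     #include <linux/pm_domain.h>
> > >     #include <linux/pm_runtime.h>
> > >
> > >     static int l2_cache_genpd_notifier(struct notifier_block *nb,
> > >                                        unsigned long action, void *data)
> > >     {
> > >             switch (action) {
> > >             case GENPD_NOTIFY_PRE_OFF:
> > >                     /* Cluster is about to power down: save the state. */
> > >                     l2_cache_save_state();
> > >                     break;
> > >             case GENPD_NOTIFY_ON:
> > >                     /* Cluster is powered up again: restore the state. */
> > >                     l2_cache_restore_state();
> > >                     break;
> > >             default:
> > >                     break;
> > >             }
> > >             return NOTIFY_OK;
> > >     }
> > >
> > >     static struct notifier_block l2_cache_nb = {
> > >             .notifier_call = l2_cache_genpd_notifier,
> > >     };
> > >
> > >     static int l2_cache_probe(struct platform_device *pdev)
> > >     {
> > >             /* Attached to the cluster genpd via "power-domains". */
> > >             pm_runtime_enable(&pdev->dev);
> > >
> > >             /* Remain runtime suspended; just listen for cluster on/off. */
> > >             return dev_pm_genpd_add_notifier(&pdev->dev, &l2_cache_nb);
> > >     }
> > >
> > > Note that dev_pm_genpd_add_notifier() only succeeds if the device is
> > > actually attached to a genpd.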
> > >
> > > Kind regards
> > > Uffe


