[PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications

Sibi Sankar quic_sibis at quicinc.com
Mon Mar 25 22:25:55 PDT 2024



On 3/22/24 16:15, Lukasz Luba wrote:
> Hi Sibi,
> 
> On 3/1/24 05:31, Sibi Sankar wrote:
>>
>>
>> On 2/29/24 19:45, Lukasz Luba wrote:
>>>
>>>
>>> On 2/29/24 12:11, Cristian Marussi wrote:
>>>> On Thu, Feb 29, 2024 at 11:45:41AM +0000, Lukasz Luba wrote:
>>>>>
>>>>>
>>>>> On 2/29/24 11:28, Cristian Marussi wrote:
>>>>>> On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2/29/24 09:59, Lukasz Luba wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2/28/24 17:00, Sibi Sankar wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2/28/24 18:54, Lukasz Luba wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2/27/24 18:16, Sibi Sankar wrote:
>>>>>>>>>>> Register for limit change notifications if supported and use the
>>>>>>>>>>> throttled frequency from the notification to apply HW pressure.
>>>>>>>>>
>>>>>>>>> Lukasz,
>>>>>>>>>
>>>>>>>>> Thanks for taking time to review the series!
>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Signed-off-by: Sibi Sankar <quic_sibis at quicinc.com>
>>>>>>>>>>> ---
>>>>>>>>>>>
>>>>>>>>>>> v3:
>>>>>>>>>>> * Sanitize range_max received from the notifier. [Pierre]
>>>>>>>>>>> * Update commit message.
>>>>>>>>>>>
>>>>>>>>>>>  drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++-
>>>>>>>>>>>  1 file changed, 28 insertions(+), 1 deletion(-)
>>>>>>>>>>>
>>>>>>>>>>> diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
>>>>>>>>>>> index 76a0ddbd9d24..78b87b72962d 100644
>>>>>>>>>>> --- a/drivers/cpufreq/scmi-cpufreq.c
>>>>>>>>>>> +++ b/drivers/cpufreq/scmi-cpufreq.c
>>>>>>>>>>> @@ -25,9 +25,13 @@ struct scmi_data {
>>>>>>>>>>>  	int domain_id;
>>>>>>>>>>>  	int nr_opp;
>>>>>>>>>>>  	struct device *cpu_dev;
>>>>>>>>>>> +	struct cpufreq_policy *policy;
>>>>>>>>>>>  	cpumask_var_t opp_shared_cpus;
>>>>>>>>>>> +	struct notifier_block limit_notify_nb;
>>>>>>>>>>>  };
>>>>>>>>>>> +const struct scmi_handle *handle;
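
For orientation, here is a minimal sketch of how the commit message and the new
scmi_data members above fit together. This is reconstructed, not the body of
the patch itself: the callback/helper names, the clamping detail and the use of
refresh_frequency_limits() to propagate the new ceiling are assumptions, and
range_max_freq relies on patch 1/2 of this series. The file-scope
'const struct scmi_handle *handle' at the end of the hunk is the bit discussed
below.

/* Assumed includes: <linux/cpufreq.h>, <linux/notifier.h>,
 * <linux/scmi_protocol.h>, <linux/units.h>
 */
static int scmi_limit_notify_cb(struct notifier_block *nb,
				unsigned long event, void *data)
{
	struct scmi_data *priv = container_of(nb, struct scmi_data,
					      limit_notify_nb);
	struct scmi_perf_limits_report *report = data;
	struct cpufreq_policy *policy = priv->policy;
	unsigned int limit_freq_khz;

	/* Sanitize the notified limit against the policy's known range. */
	limit_freq_khz = report->range_max_freq / HZ_PER_KHZ;
	policy->max = clamp(limit_freq_khz, policy->cpuinfo.min_freq,
			    policy->cpuinfo.max_freq);

	/* Re-evaluate the policy so the throttled maximum takes effect. */
	refresh_frequency_limits(policy);

	return NOTIFY_OK;
}

/*
 * Registration on the init path, once priv->policy is known; how the
 * sdev/handle reach this point is exactly what the thread below debates.
 */
static int scmi_register_limit_notify(struct scmi_device *sdev,
				      struct scmi_data *priv)
{
	u32 domain = priv->domain_id;

	priv->limit_notify_nb.notifier_call = scmi_limit_notify_cb;
	return sdev->handle->notify_ops->devm_event_notifier_register(sdev,
			SCMI_PROTOCOL_PERF,
			SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED,
			&domain, &priv->limit_notify_nb);
}
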
>>>>>>>>
>>>>>>>> I've missed this bit here.
>>>>>>>
>>>>>>> So for this change we actually have to ask Cristian or Sudeep
>>>>>>> because I'm not sure if we have only one 'handle' instance
>>>>>>> for all cpufreq devices.
>>>>>>>
>>>>>>> If we have different 'handle' instances we cannot move them to a
>>>>>>> single global pointer.
>>>>>>>
>>>>>>> Sudeep, Cristian what do you think?
>>>>>>
>>>>>> I was just replying after noticing this :D .... since SCMI drivers
>>>>>> can be probed multiple times IF you define multiple scmi top nodes in
>>>>>> your DT containing the same protocol nodes, they receive a distinct
>>>>>> sdev/handle/ph for each probe... so any attempt to globalize these
>>>>>> won't work... BUT...
>>>>>>
>>>>>> ...this is a bit of a weird setup BUT it is not against the spec and
>>>>>> it can be used to further parallelize the SCMI accesses to disjoint
>>>>>> sets of resources within the same protocol (a long story here...) AND
>>>>>> this type of setup is something that is already used by some other
>>>>>> colleagues of Sibi working on a different line of products (AFAIK)...
>>>>>>
>>>>>> So, for these reasons, usually, all the other SCMI drivers have 
>>>>>> per-instance
>>>>>> non-global references to handle/sdev/ph....
>>>>>>
>>>>>> ...having said that, though, looking at the structure of CPUFreq
>>>>>> drivers, I am not sure that they can stand such a setup where
>>>>>> multiple instances of this same driver are probed
>>>>>>
>>>>>> .... indeed the existing *ph ref above is already global.... so it
>>>>>> won't work anyway in the case of multiple instances now...
>>>>>>
>>>>>> ...and if I look at how CPUFreq expects the signature of
>>>>>> scmi_cpufreq_get_rate() to be, and how it is implemented now using
>>>>>> the global *ph reference, it is clearly already not working cleanly
>>>>>> on a multi-instance setup...
>>>>>>
>>>>>> ...now... I can imagine how to (maybe) fix the above by removing the
>>>>>> globals, BUT the question, more generally, is: is CPUFreq supposed to
>>>>>> work at all in this multi-probed mode of operation?
>>>>>> Does it even make sense to be able to support this in CPUFreq?
>>>>>>
>>>>>> (as an example, in cpufreq.c there is a static global cpufreq_driver
>>>>>>    pointing to the arch-specific configured driver, BUT that also
>>>>>>    holds some .driver_data AND that clearly won't be instance
>>>>>>    specific if you probe multiple times and register with CPUFreq
>>>>>>    multiple times...)
>>>>>>
>>>>>>    More questions than answers here :D
>>>>>>
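
To make the globals problem concrete: the cpufreq core hands the driver
callbacks only a CPU number, so the SCMI references have to come from
somewhere the callback can reach. Today that is the file-scope perf_ops/ph,
roughly as below (paraphrased from scmi-cpufreq.c around v6.8, shown here only
to illustrate the point above, not as a new proposal):

static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
{
	struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
	struct scmi_data *priv = policy->driver_data;
	unsigned long rate;
	int ret;

	/* perf_ops and ph are the file-scope globals being discussed. */
	ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false);
	if (ret)
		return 0;

	return rate / 1000;
}

Any per-instance rework would presumably have to move those references into
something the callbacks can already reach, e.g. policy->driver_data
(struct scmi_data), which is also how the notifier callback in this patch
finds its instance via container_of().
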
>>>>>
>>>>> Thanks Cristian for the instant response. Yes, indeed now we have more
>>>>> questions :) (which is good). But that's a good description of the
>>>>> situation.
>>>>>
>>>>> So let's consider a few options for what we could do now:
>>>>> 1. Let Sibi add another piece of global state, the 'handle', but add
>>>>>     a BUG_ON() or WARN_ON() in the probe path if the next
>>>>>     'handle' instance is different from the one already set in the
>>>>>     global. This would simply mean that we don't support (yet)
>>>>>     such a configuration in a platform. As you said, we
>>>>>     already have the *ph global, so maybe such platforms
>>>>>     with multiple instances for this particular cpufreq and
>>>>>     performance protocol don't exist yet.
>>>>
>>>> Yes, this is the quickest way (and a WARN_ON() is better I'd say), but
>>>> there are similar issues of "unicity" already with another vendor's
>>>> SCMI driver and custom protocol currently under review, so I was
>>>> thinking of adding a new common mechanism in SCMI to handle this ...
>>>> I have not thought about this really in depth and I want to chat with
>>>> Sudeep about this...
>>>>
>>>>> 2. Ask Sibi to hold off on this change until we refactor the
>>>>>     existing driver such that it can easily support those
>>>>>     multiple instances. Then pick up this patch set.
>>>>>     Although, we would also like to have those notifications from our
>>>>>     Juno SCP reference FW, so the feature is useful.
>>>>> 3. Ask Sibi to refactor his patch to somehow get the 'handle'
>>>>>     in a different way, using existing code and not introducing this
>>>>>     global.
>>>>>
>>>>
>>>>> IMHO we could do this in steps: 1. and then 2. When
>>>>>     we create some mock platform to test this refactoring we can
>>>>>     start cleaning it up.
>>
>> I should be able to volunteer a platform to test against when
>> we have things ready.
>>
>>>>>
>>>>
>>>> Both of these options really beg an answer to my original
>>>> question... if we somehow enable this multi-probe support in the
>>>> scmi-cpufreq.c driver by avoiding global refs, does this work at all in
>>>> the context of CPUFreq?
>>>
>>> I don't know yet.
>>>
>>>>
>>>> ...or is it just that CPUFreq cannot handle such a configuration (and
>>>> maybe we don't want it to), and so the only solution here is just 1. at
>>>> first and then a common, refined mechanism (as mentioned above) to
>>>> ensure this "unicity" of the probes for some drivers?
>>>
>>> This sounds reasonable.
>>>
>>>>
>>>> I'm not familiar enough to grasp if this "multi-probed" mode of
>>>> operation is allowed/supported by CPUFreq and, more importantly, if it
>>>> makes any sense at all to be a supported mode...
>>>>
>>>
>>> OK, let me check some stuff in the code and think for a while on that.
>>> Thanks Cristian!
>>>
>>> Sibi, please give me a few days. In the meantime you can probably
>>> continue with the 'boost' patch set.
>>
>> sure, thanks. I've plenty of things to send out so no hurry ;)
>>
>> -Sibi
>>
> 
> I've gone through the cpufreq code. It's quite complicated how those
> policies, CPUs and drivers are set up. Although, IMHO we should be
> safe with your current proposal in this patch.
> 
> As we discussed with Cristian, we can take that approach further.
> 
> Therefore, you can add:
> 
> Reviewed-by: Lukasz Luba <lukasz.luba at arm.com>

Thanks, I'll re-spin the series with a WARN_ON() in the
interim.
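
A minimal sketch of what that interim guard could look like (option 1 above);
the helper name and its exact placement in scmi_cpufreq_probe() are
assumptions, not the re-spun patch:

/*
 * Single file-scope handle, as today; the guard just makes the
 * single-instance assumption explicit instead of silently overwriting
 * the pointer on a second probe.
 */
static const struct scmi_handle *scmi_cpufreq_handle;

static int scmi_cpufreq_track_handle(const struct scmi_handle *h)
{
	/* Multi-instance setups are not supported by scmi-cpufreq yet. */
	if (scmi_cpufreq_handle && WARN_ON(scmi_cpufreq_handle != h))
		return -EBUSY;

	scmi_cpufreq_handle = h;
	return 0;
}

scmi_cpufreq_probe() would call this with sdev->handle before anything else
starts using the handle.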

-Sibi

> 
> Regards,
> Lukasz


