[PATCH v4 4/4] arm64: dts: qcom: sdm845: Add CPU BWMON
Rajendra Nayak
quic_rjendra at quicinc.com
Tue Jun 28 06:15:18 PDT 2022
On 6/28/2022 4:20 PM, Krzysztof Kozlowski wrote:
> On 28/06/2022 12:36, Rajendra Nayak wrote:
>>
>> On 6/27/2022 6:09 PM, Krzysztof Kozlowski wrote:
>>> On 26/06/2022 05:28, Bjorn Andersson wrote:
>>>> On Thu 23 Jun 07:58 CDT 2022, Krzysztof Kozlowski wrote:
>>>>
>>>>> On 23/06/2022 08:48, Rajendra Nayak wrote:
>>>>>>>>> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>>>>>>> index 83e8b63f0910..adffb9c70566 100644
>>>>>>>>> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>>>>>>> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>>>>>>> @@ -2026,6 +2026,60 @@ llcc: system-cache-controller@1100000 {
>>>>>>>>> interrupts = <GIC_SPI 582 IRQ_TYPE_LEVEL_HIGH>;
>>>>>>>>> };
>>>>>>>>>
>>>>>>>>> + pmu@1436400 {
>>>>>>>>> + compatible = "qcom,sdm845-cpu-bwmon";
>>>>>>>>> + reg = <0 0x01436400 0 0x600>;
>>>>>>>>> +
>>>>>>>>> + interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>;
>>>>>>>>> +
>>>>>>>>> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
>>>>>>>>> + <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
>>>>>>>>> + interconnect-names = "ddr", "l3c";
>>>>>>>>
>>>>>>>> Is this the pmu/bwmon instance between the cpu and caches or the one between the caches and DDR?
>>>>>>>
>>>>>>> To my understanding this is the one between CPU and caches.
>>>>>>
>>>>>> Ok, but then because the OPP table lists the DDR bw first and the cache bw second, doesn't the driver
>>>>>> end up comparing the bw values reported by the pmu against the DDR bw instead of the cache bw?
>>>>>
>>>>> I double checked now and you're right.
>>>>>
>>>>>> At least in my testing on sc7280 I found this to mess things up, and I always ended up at
>>>>>> higher OPPs even while the system was completely idle. Comparing the values against the cache bw
>>>>>> fixed it. (sc7280 also has a bwmon4 instance between the cpu and caches and a bwmon5 between the cache
>>>>>> and DDR)
>>>>>
>>>>> In my case it exposes a different issue - underperformance. Somehow the
>>>>> bwmon does not report bandwidth high enough to vote for high bandwidth.
>>>>>
>>>>> After removing the DDR interconnect and its bandwidth OPP values, I have the following for:
>>>>> sysbench --threads=8 --time=60 --memory-total-size=20T --test=memory
>>>>> --memory-block-size=4M run
>>>>>
>>>>> 1. Vanilla: 29768 MB/s
>>>>> 2. Vanilla without CPU votes: 8728 MB/s
>>>>> 3. Previous bwmon (voting too high): 32007 MB/s
>>>>> 4. Fixed bwmon: 24911 MB/s
>>>>> Bwmon does not vote for maximum L3 speed:
>>>>> bwmon reports 9408 MB/s (thresholds set: <9216000 15052801>)
>>>>> osm l3 aggregate 14355 MBps -> 897 MHz, level 7, bw 14355 MBps
>>>>>
>>>>> Maybe that's just a problem with the missing governor, which would vote for
>>>>> bandwidth while rounding up or anticipating higher needs.
>>>>>
>>>>>>>> Depending on which one it is, shouldn't we just be scaling one of them and not both of the interconnect paths?
>>>>>>>
>>>>>>> The interconnects are the same as the ones used for the CPU nodes; therefore, if
>>>>>>> we want to scale both when scaling the CPU, then we also want to scale both
>>>>>>> when seeing traffic between the CPU and cache.
>>>>>>
>>>>>> Well, they were both associated with the CPU node because, with no other input to decide on _when_
>>>>>> to scale the caches and DDR, we just put in a mapping table which simply mapped a CPU freq to an L3 _and_
>>>>>> a DDR freq. So with just one input (the CPU freq) we decided what both the L3 freq and the DDR freq should be.
>>>>>>
>>>>>> Now with 2 PMUs, we have 2 inputs, so we can individually scale the L3 based on the cache PMU
>>>>>> counters and the DDR based on the DDR PMU counters, no?
>>>>>>
>>>>>> Since you said you have plans to add support for the other pmu as well (bwmon5, between the cache and DDR),
>>>>>> how else would you have the OPP table associated with that pmu instance? Would you again have both the
>>>>>> L3 and DDR scale based on the inputs from that bwmon too?
>>>>>
>>>>> Good point, thanks for sharing. I think you're right. I'll keep only the
>>>>> l3c interconnect path.
>>>>>
>>>>
>>>> If I understand correctly, <&osm_l3 MASTER_OSM_L3_APPS &osm_l3
>>>> SLAVE_OSM_L3> relates to the L3 cache speed, which sits inside the CPU
>>>> subsystem. As such, traffic hitting this cache will not show up in either
>>>> bwmon instance.
>>>>
>>>> The path <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>
>>>> affects the DDR frequency. So the traffic measured by the cpu-bwmon
>>>> would be the CPU subsystem's traffic that misses the L1/L2/L3 caches and
>>>> hits the memory bus towards DDR.
>>
>> That seems right. Looking some more into the downstream code and register definitions,
>> I see the 2 bwmon instances actually lie on the path outside the CPU SS towards DDR:
>> the first one (bwmon4) is between the CPUSS and the LLCC (system cache) and the second one
>> (bwmon5) is between the LLCC and DDR. So we should use the counters from bwmon4 to
>> scale the CPU-LLCC path (and not L3); on sc7280 that would mean splitting the
>> <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3> path into
>> <&gem_noc MASTER_APPSS_PROC 3 &gem_noc SLAVE_LLCC 3> (voting based on the bwmon4 inputs)
>> and <&mc_virt MASTER_LLCC 3 &mc_virt SLAVE_EBI1 3> (voting based on the bwmon5 inputs),
>> and similar for sdm845 too.
>>
>> L3 should perhaps still be voted for based on the cpu freq, as is done today.
>
> This would mean that the original bandwidth values (800 - 7216 MB/s) were
> correct. However, we still have your observation that bwmon kicks in very
> fast, and my measurements show that the sampled bwmon data reports ~20000
> MB/s of bandwidth.
Right, that's because the bandwidth supported on the cpu<->llcc path is much higher
than what the DDR frequencies support. For instance, on sc7280 I see 2288 - 15258 MB/s for the LLCC while
the DDR max is 8532 MB/s.
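
Roughly, for sc7280 I'd picture the two instances ending up something like the below.
This is only a sketch to illustrate the split described above; the compatibles, unit
addresses, interrupt numbers and OPP table labels are placeholders I made up, not values
taken from the sc7280 documentation. Only the two interconnect paths are from the
discussion above.

        /* CPU <-> LLCC monitor (bwmon4): votes only on the cpu-to-llcc path */
        pmu@90b6400 {                                   /* placeholder unit address */
                compatible = "qcom,sc7280-cpu-bwmon";   /* placeholder compatible */
                reg = <0 0x090b6400 0 0x600>;           /* placeholder */
                interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>; /* placeholder */

                interconnects = <&gem_noc MASTER_APPSS_PROC 3 &gem_noc SLAVE_LLCC 3>;
                interconnect-names = "llcc";

                operating-points-v2 = <&cpu_bwmon_opp_table>;   /* LLCC bandwidth OPPs */
        };

        /* LLCC <-> DDR monitor (bwmon5): votes only on the llcc-to-ddr path */
        pmu@9091000 {                                   /* placeholder unit address */
                compatible = "qcom,sc7280-llcc-bwmon";  /* placeholder compatible */
                reg = <0 0x09091000 0 0x600>;           /* placeholder */
                interrupts = <GIC_SPI 582 IRQ_TYPE_LEVEL_HIGH>; /* placeholder */

                interconnects = <&mc_virt MASTER_LLCC 3 &mc_virt SLAVE_EBI1 3>;
                interconnect-names = "ddr";

                operating-points-v2 = <&llcc_bwmon_opp_table>;  /* DDR bandwidth OPPs */
        };

With each instance having its own OPP table, the bwmon4 thresholds can scale up to the
~15258 MB/s the LLCC path supports, while the bwmon5 OPPs top out at the 8532 MB/s DDR
maximum, so neither monitor ends up comparing its counters against the other path's
bandwidth range.
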
>
>
> Best regards,
> Krzysztof