[PATCH] sched: dynamic config sd_flags if described in DT
Wang Qing (王擎)
wangqing at vivo.com
Tue Mar 22 23:45:29 PDT 2022
>>
>>> (1) Can you share more information about your CPU topology?
>>>
>>> I guess it is a single DSU (DynamIQ Shared Unit) ARMv9 system with 8
>>> CPUs? So L3 spans over [CPU0..CPU7].
>>>
>>> You also mentioned complexes. Am I right in assuming that [CPU0..CPU3]
>>> are Cortex-A510 cores where each 2 CPUs share a complex?
>>>
>>> What kind of uarch are the CPUs in [CPU4..CPU7]? Are they Cortex-A510's
>>> as well? I'm not sure after reading your email:
>>
>> Yes. Android systems currently use the default_domain with the wrong sd_flags.
>> Take the Qualcomm SM8450 as an example; its CPU and cache topology (1+3+4):
>
>Ah, your system looks like this:
>
> .---------------.
>CPU |0 1 2 3 4 5 6 7|
> +---------------+
>uarch |l l l l m m m b| (so called tri-gear: little, medium, big)
> +---------------+
> L2 | | | | | | |
> +---------------+
> L3 |<-- -->|
> +---------------+
> |<-- cluster -->|
> +---------------+
> |<-- DSU -->|
> '---------------'
>
>> | DSU |
>> | cluster0 | cluster1 |cluster2|
>
>^^^ Those aren't real clusters, hence the name <Phantom> SD. The cluster
>is [CPU0...CPU7]. Android uses Phantom SD to subgroup CPUs with the same
>uarch. That's why you get your MC->DIE SD's on your system and
>SD_SHARE_PKG_RESOURCES (ShPR) on MC rather than DIE.
>
>Note, you should already have an asymmetric SD hierarchy. CPU7 should
>only have DIE, not MC! Each CPU has its own SD hierarchy!
>
>> | core0 core1 core2 core3 | core4 core5 core6 | core7 |
>> | complex0 | complex1 | ------------------------ |
>> | L2 cache | L2 cache | L2 | L2 | L2 | L2 |
>> | L3 cache |
>>
>> The sched domains are now:
>> DIE[0-7] (no SD_SHARE_PKG_RESOURCES)
>> MC[0-3][4-6][7] (SD_SHARE_PKG_RESOURCES)
>>
>> The sched domains should be:
>> DIE[0-7] (SD_SHARE_PKG_RESOURCES)
>> MC[0-3][4-6][7] (no SD_SHARE_PKG_RESOURCES)
>
>First remember, using Phantom SD in Android is already a hack. Normally
>your system should only have an MC SD for each CPU (with ShPR).
>
>Now, if you want to move ShPR from MC to DIE then a custom topology
>table should do it, i.e. you don't have to change any generic task
>scheduler code.
>
>static inline int cpu_cpu_flags(void)
>{
> return SD_SHARE_PKG_RESOURCES;
>}
>
>static struct sched_domain_topology_level custom_topology[] = {
>#ifdef CONFIG_SCHED_SMT
> { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
>#endif
>
>#ifdef CONFIG_SCHED_CLUSTER
> { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
>#endif
>
>#ifdef CONFIG_SCHED_MC
> { cpu_coregroup_mask, SD_INIT_NAME(MC) }, /* no cpu_core_flags: MC loses ShPR */
>#endif
> { cpu_cpu_mask, cpu_cpu_flags, SD_INIT_NAME(DIE) }, /* cpu_cpu_flags: DIE gains ShPR */
> { NULL, },
>};
>
>set_sched_topology(custom_topology);
However, due to the limitations of GKI, we cannot change the sd topology
ourselves. But we can describe the CPU and cache topology through DT.
So why not take ShPR from DT first, and fall back to the default when it
is not configured?
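As a sketch of what that DT description could look like: the existing,
documented cache bindings (`next-level-cache`, `cache-level`,
`cache-unified`) already let the kernel's cacheinfo code see that L3
spans all eight CPUs. Node names, labels and compatibles below are
illustrative, not copied from the SM8450 DT:

```dts
cpus {
	cpu0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a510";	/* illustrative */
		reg = <0x0>;
		next-level-cache = <&l2_0>;	/* CPU0/CPU1 share complex0's L2 */
	};
	/* cpu1 also points at &l2_0; cpu2/cpu3 at a second shared L2 node;
	 * cpu4..cpu7 each at a private L2 node, all chaining up to &l3. */

	l2_0: l2-cache-0 {
		compatible = "cache";
		cache-level = <2>;
		cache-unified;
		next-level-cache = <&l3>;
	};

	l3: l3-cache {
		compatible = "cache";
		cache-level = <3>;
		cache-unified;	/* spans CPU0..CPU7: candidate for ShPR on DIE */
	};
};
```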
>
>> *CLS[0-1][2-3](SD_SHARE_PKG_RESOURCES)
>
>But why do you want to have yet another SD underneath MC for CPU0-CPU3?
>sd_llc is assigned to the highest ShPR SD, which would be DIE.
We want to do something with the shared L2 cache of the complex (for
WALT, for example); you can ignore it here, and we can discuss it once
that work is done.
Thanks,
Wang