[PATCH v2 0/8] arch_topology: Updates to add socket support and fix cluster ids

Ionela Voinescu ionela.voinescu at arm.com
Thu May 19 09:32:49 PDT 2022


Hi Sudeep,

On Wednesday 18 May 2022 at 10:33:17 (+0100), Sudeep Holla wrote:
> Hi All,
> 
> This series intends to fix some discrepancies we have in the CPU topology
> parsing from the device tree /cpu-map node, which also diverges from the
> behaviour on an ACPI enabled platform. The expectation is that both DT and
> ACPI enabled systems must present a consistent view of the CPU topology.
> 
> Currently we assign the generated cluster count as the physical package
> identifier for each CPU, which is wrong. The device tree bindings for CPU
> topology support socket nodes, from which the socket or physical package
> identifier for a given CPU can be inferred. Also, we don't check whether
> all the cores/threads belong to the same cluster before updating their
> sibling masks, which is fine as we don't set the cluster id yet.
> 
> These changes also assign the cluster identifier as parsed from the device
> tree cluster nodes within /cpu-map, without support for nesting of clusters.
> Finally, they also add support for socket nodes in /cpu-map. With this, the
> parsing of the exact same information from the ACPI PPTT and the /cpu-map
> DT node aligns well.
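
Just to check I'm reading the new /cpu-map handling the same way: a minimal
sketch with the socket level would look roughly like the below (node names
and phandle labels are purely illustrative):

	cpu-map {
		socket0 {
			cluster0 {
				core0 { cpu = <&CPU0>; };
				core1 { cpu = <&CPU1>; };
			};
			cluster1 {
				core0 { cpu = <&CPU2>; };
				core1 { cpu = <&CPU3>; };
			};
		};
	};

i.e. socket0 would provide the physical package identifier, the clusterN
nodes the cluster identifiers, and nested cluster nodes would not be parsed.
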
> 
> The only exception is that the last level cache id information can be
> inferred directly from the same ACPI PPTT, while on DT we need to parse
> the CPU cache nodes.
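
(On the DT side, if I understand 7/8 correctly, that means finding the CPU's
last level cache node by following the next-level-cache phandles from the
CPU node, e.g. something like the sketch below, with illustrative names:)

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x0>;
		next-level-cache = <&CLUSTER0_L2>;
	};

	CLUSTER0_L2: l2-cache0 {
		compatible = "cache";
		cache-unified;
		cache-level = <2>;
	};
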
> 
> P.S.: I have not cc-ed Greg and Rafael so that all the users of
> arch_topology can agree on the changes first before we include them.
> 
> v1[1]->v2:
> 	- Updated the ID validity check to include all non-negative values
> 	- Added support to get the device node for the CPU's last level cache
> 	- Added support to build llc_sibling on DT platforms
> 
> [1] https://lore.kernel.org/lkml/20220513095559.1034633-1-sudeep.holla@arm.com
> 
> Sudeep Holla (8):
>   arch_topology: Don't set cluster identifier as physical package identifier
>   arch_topology: Set thread sibling cpumask only within the cluster
>   arch_topology: Set cluster identifier in each core/thread from /cpu-map
>   arch_topology: Add support for parsing sockets in /cpu-map
>   arch_topology: Check for non-negative value rather than -1 for IDs validity
>   arch_topology: Avoid parsing through all the CPUs once a outlier CPU is found
>   of: base: add support to get the device node for the CPU's last level cache
>   arch_topology: Add support to build llc_sibling on DT platforms
> 

Just a recommendation for the patch-set structure: it would be best to use
the following sequence to maintain the same scheduler topology and
behaviour when the set is partially applied (I'm currently testing this on
Juno, but it should be the case for other platforms as well):

2/8 arch_topology: Set thread sibling cpumask only within the cluster
5/8 arch_topology: Check for non-negative value rather than -1 for IDs validity
6/8 arch_topology: Avoid parsing through all the CPUs once a outlier CPU is found

--> these are only preparation/cleanup patches and don't affect current
functionality

7/8 of: base: add support to get the device node for the CPU's last level cache
8/8 arch_topology: Add support to build llc_sibling on DT platforms

--> these will populate the llc siblings, but this list will be equal to
the core siblings (based on package_id), so nothing changes in the scheduler
topology. Even if CONFIG_SCHED_CLUSTER=y, we still have cluster_id=-1, so
nothing will change in that case either, for the patches so far.

1/8 arch_topology: Don't set cluster identifier as physical package identifier

--> 1/8 is the troublemaker if it's the first patch, as it will result
in having all CPUs in core_siblings, so the topology will be flattened to
just an MC level for a typical b.L system like Juno. But if you add it after
all of the above patches, the llc_siblings will help create the same
MC and DIE levels we expect.
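
To make the Juno case concrete (a rough sketch only; the labels and the
per-cluster cache layout below are illustrative): each cluster shares its
own L2, so once 7/8 and 8/8 are in, the llc_siblings span one cluster while
the package spans all CPUs, which is what recreates the MC and DIE levels
after 1/8 stops using the cluster id as the package id:

	cpu-map {
		cluster0 {		/* 2x A57, shared A57_L2 */
			core0 { cpu = <&A57_0>; };
			core1 { cpu = <&A57_1>; };
		};
		cluster1 {		/* 4x A53, shared A53_L2 */
			core0 { cpu = <&A53_0>; };
			core1 { cpu = <&A53_1>; };
			core2 { cpu = <&A53_2>; };
			core3 { cpu = <&A53_3>; };
		};
	};

with each A57 CPU node having next-level-cache = <&A57_L2> and each A53 CPU
node having next-level-cache = <&A53_L2>.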

3/8 arch_topology: Set cluster identifier in each core/thread from /cpu-map
4/8 arch_topology: Add support for parsing sockets in /cpu-map

--> Here 3/8 will start creating complications when we have clusters in
DT and CONFIG_SCHED_CLUSTER=y, but I'll detail this in a reply to that
patch. For CONFIG_SCHED_CLUSTER=n the topology and scheduler behaviour
should be the same as before this set.

Hope it helps,
Ionela.


>  drivers/base/arch_topology.c  | 75 +++++++++++++++++++++++++++--------
>  drivers/of/base.c             | 33 +++++++++++----
>  include/linux/arch_topology.h |  1 +
>  include/linux/of.h            |  1 +
>  4 files changed, 85 insertions(+), 25 deletions(-)
> 
> --
> 2.36.1
> 
> 


