[PATCH v5 7/9] arm64: Topology, rename cluster_id

Morten Rasmussen morten.rasmussen at foss.arm.com
Tue Jan 2 03:30:14 PST 2018


On Tue, Jan 02, 2018 at 10:29:35AM +0800, Xiongfeng Wang wrote:
> Hi,
> 
> On 2017/12/18 20:42, Morten Rasmussen wrote:
> > On Fri, Dec 15, 2017 at 10:36:35AM -0600, Jeremy Linton wrote:
> >> Hi,
> >>
> >> On 12/13/2017 12:02 PM, Lorenzo Pieralisi wrote:
> >>> [+Morten, Dietmar]
> >>>
> >>> $SUBJECT should be:
> >>>
> >>> arm64: topology: rename cluster_id
> >>
> [cut]
> >>
> >> I was hoping someone else would comment here, but my take at this point is
> >> that it doesn't really matter in a functional sense at the moment.
> >> Like the chiplet discussion, it can be the subject of a future patch along
> >> with the patches which tweak the scheduler to understand the split.
> >>
> >> BTW, given that I'm OoO next week and the weeks following are the holidays,
> >> I don't intend to repost this for a couple of weeks. I don't think there are
> >> any issues with this set.
> >>
> >>>
> >>> There is also arch/arm to take into account, again, this patch is
> >>> just renaming (as it should have been named from the beginning) a
> >>> topology level, but we should consider everything from a legacy
> >>> perspective.
> > 
> > arch/arm has gone for thread/core/socket for the three topology levels
> > it supports.
> > 
> > I'm not sure what short-term value keeping cluster_id has. Isn't it just
> > about where we make the package = cluster assignment? Currently it is in
> > the definition of topology_physical_package_id. If we keep cluster_id
> > and add package_id, it gets moved into the MPIDR/DT parsing code. 
> > 
> > Keeping cluster_id and introducing a topology_cluster_id function could
> > help clean up some of the users of topology_physical_package_id that
> > currently assume package_id == cluster_id, though.
> 
> I think we still need the information describing which cores are in one cluster.
> Many arm64 chips use a core/cluster/socket topology. Cores in one cluster may
> share the same L2 cache. That information can be used to build the sched_domains. If we
> put the cores of one cluster into one sched_domain, performance will be better (please see
> kernel/sched/topology.c:1197, where cpu_coregroup_mask() uses 'core_sibling' to build a
> multi-core sched_domain).
> So I think we still need a variable to record which cores are in one sched_domain for future use.
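
For reference, the path pointed at above is roughly the following (a
sketch from memory, so exact line numbers and details will vary between
kernel versions): on arm64, cpu_coregroup_mask() returns the core_sibling
mask, which is built from cluster_id, and the generic scheduler code uses
it for the MC sched_domain level:

	/* arch/arm64/kernel/topology.c (roughly) */
	const struct cpumask *cpu_coregroup_mask(int cpu)
	{
		return &cpu_topology[cpu].core_sibling;
	}

	/* kernel/sched/topology.c (roughly, SMT level omitted) */
	static struct sched_domain_topology_level default_topology[] = {
	#ifdef CONFIG_SCHED_MC
		{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
	#endif
		{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
		{ NULL, },
	};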

I agree that information about clusters is useful. The problem is that
we currently treat clusters as sockets (also known as dies or
physical_package_id, depending on which bit of code you are looking at).
We don't handle the core/cluster/socket topology you mention as three
'levels' of grouping of cores. We currently create one sched_domain
level for the cores (in a cluster) and a parent sched_domain level
spanning all clusters in the system, ignoring whether those clusters are
in different sockets, and we tell the scheduler that those clusters are
all separate sockets. This doesn't work very well for systems with many
clusters and/or multiple sockets.
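
To make that concrete, this is roughly what the arm64 code looks like
today (a sketch, so details may differ slightly between versions): the
MPIDR/DT parsing fills in cluster_id, and that same field is then
reported to the rest of the kernel as the physical package:

	/* arch/arm64/include/asm/topology.h (before this patch), roughly: */
	struct cpu_topology {
		int thread_id;
		int core_id;
		int cluster_id;
		cpumask_t thread_sibling;
		cpumask_t core_sibling;
	};

	#define topology_physical_package_id(cpu)	(cpu_topology[cpu].cluster_id)

So from the scheduler's point of view every cluster looks like a
separate socket, and there is no level left over to describe the real
socket boundary.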

How to solve that problem is a longer discussion, which I'm happy to
have. This patch is just preparing to get rid of the assumption that
clusters are sockets, as this is not in line with what other
architectures do or with the assumptions made by the scheduler.
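
In other words, the end state this is preparing for is essentially the
following (a sketch; see the patch itself for the exact change):

	/* after the rename, roughly: */
	struct cpu_topology {
		int thread_id;
		int core_id;
		int package_id;		/* was cluster_id */
		cpumask_t thread_sibling;
		cpumask_t core_sibling;
	};

	#define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)

Whether we later add a separate cluster_id (and a topology_cluster_id()
accessor) on top of that is part of that longer discussion.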

Morten


