[PATCH v2 00/12] coresight: Add CPU cluster funnel/replicator/tmc support
Sudeep Holla
sudeep.holla at arm.com
Fri Dec 19 02:21:24 PST 2025
On Fri, Dec 19, 2025 at 10:13:14AM +0800, yuanfang zhang wrote:
>
>
> On 12/18/2025 7:33 PM, Sudeep Holla wrote:
> > On Thu, Dec 18, 2025 at 12:09:40AM -0800, Yuanfang Zhang wrote:
> >> This patch series adds support for CoreSight components local to CPU clusters,
> >> including funnel, replicator, and TMC, which reside within CPU cluster power
> >> domains. These components require special handling due to power domain
> >> constraints.
> >>
> >
> > Could you clarify why PSCI-based power domains associated with clusters in
> > domain-idle-states cannot address these requirements, given that PSCI CPU-idle
> > OSI mode was originally intended to support them? My understanding of this
> > patch series is that OSI mode is unable to do so, which, if accurate, appears
> > to be a flaw that should be corrected.
>
> It is due to the particular characteristics of the CPU cluster power
> domain. Runtime PM for CPU devices works a little differently: it is
> mostly used to map the hierarchical CPU topology (PSCI OSI mode) onto
> the genpd framework, which handles the last-CPU-in-cluster accounting
> (sketched below).
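>
> Roughly (a simplified sketch of the upstream cpuidle-psci-domain flow,
> from memory): in OSI mode the cluster genpd's .power_off does not touch
> hardware at all; it only records the target composite state, which the
> last CPU entering idle then passes to firmware. There is no .power_on
> path that could wake the cluster on behalf of an unrelated consumer:
>
> #include <linux/pm_domain.h>
> /* psci_set_domain_state() lives in the cpuidle-psci internals */
>
> static int cluster_pd_power_off(struct generic_pm_domain *pd)
> {
> 	u32 *pd_state = pd->states[pd->state_idx].data;
>
> 	/* Record only; the state is actually entered from the idle path. */
> 	psci_set_domain_state(*pd_state);
> 	return 0;
> }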
That is indeed the intended design. Could you clarify which specific
characteristics differentiate it here?
> It doesn’t actually send an IPI to wake up the CPU device: it has no
> .power_on/.power_off callbacks implemented that would be invoked from
> the .runtime_resume path. This behavior matches the upstream kernel.
>
I am quite lost here. Why is it necessary to wake up the CPU? If I understand
correctly, all of this complexity is meant to ensure that the cluster power
domain is enabled before any of the funnel registers are accessed. Is that
correct?
If so, and if the cluster domains are already defined as the power domains for
these funnel devices, then they should be requested to power on automatically
before any register access occurs. Is that not the case?
What am I missing in this reasoning?
The only explanation I can see is that the firmware does not properly honor
power-domain requests coming directly from the OS. I believe that may be the
case, but I would be glad to be proven wrong.
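For reference, the pattern I would expect to just work, assuming the
funnel's power-domains entry points at the cluster domain (a minimal
sketch with hypothetical names; error handling and the CoreSight unlock
sequence trimmed):

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/pm_runtime.h>

static int funnel_enable_port(struct device *dev, void __iomem *base,
			      int port)
{
	int ret;

	/* genpd is expected to power up the cluster PD before this returns */
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return ret;

	/* register access should now be safe */
	writel(readl(base) | BIT(port), base);

	pm_runtime_put(dev);	/* let the PD power back down when idle */
	return 0;
}

If that sequence does not keep the registers accessible, the problem
lies in the power-domain implementation or the firmware, not in
anything a new DT property can paper over.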
> >
> >> Unlike system-level CoreSight devices, these components share the CPU cluster's
> >> power domain. When the cluster enters low-power mode (LPM), their registers
> >> become inaccessible. Notably, `pm_runtime_get` alone cannot bring the cluster
> >> out of LPM, making standard register access unreliable.
> >>
> >
> > Are these devices the only ones on the system that are uniquely bound to
> > cluster-level power domains? If not, what additional devices share this
> > dependency so that we can understand how they are managed in comparison?
> >
>
> Yes, devices like ETM and TRBE also share this power domain and access constraint.
> Their drivers naturally handle enablement/disablement on the specific CPU they
> belong to (e.g., via hotplug callbacks or existing smp_call_function paths).
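>
> Roughly the following pattern (schematic, with hypothetical names):
>
> #include <linux/smp.h>
>
> struct etm_like_drvdata {
> 	int cpu;	/* CPU this per-CPU component belongs to */
> };
>
> static void etm_like_enable_hw(void *info)
> {
> 	/* Runs on the owning CPU, so the cluster is awake by
> 	 * construction and the registers are safe to program here.
> 	 */
> }
>
> static int etm_like_enable(struct etm_like_drvdata *drvdata)
> {
> 	/* wait=1: return only after the hardware programming has run */
> 	return smp_call_function_single(drvdata->cpu, etm_like_enable_hw,
> 					drvdata, 1);
> }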
I understand many things are possible to implement, but the key question
remains: why doesn’t the existing OSI mechanism, which was added
specifically to cover cases like this, solve the problem today?
Especially on platforms with OSI enabled, what concrete limitation forces us
into this additional “wake-up” path instead of relying on OSI to manage the
dependency/power sequencing?
> >> To address this, the series introduces:
> >> - Identifying cluster-bound devices via a new `qcom,cpu-bound-components`
> >> device tree property.
> >
> > Really, no please.
> >
>
> Our objective is to determine which CoreSight components are physically
> located within the CPU cluster power domain.
>
> Would it be acceptable to derive this relationship from the existing
> power-domains binding?
In my opinion, this is not merely a possibility but an explicit expectation.
> For example, if a Funnel or Replicator node is linked to a power-domains
> entry that specifies a cpumask, the driver could recognize this shared
> dependency and automatically apply the appropriate cluster-aware behavior.
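>
> Something along these lines (a hypothetical sketch; the "cpus" marker
> on the power-domain node is an assumption, not an existing binding):
>
> #include <linux/device.h>
> #include <linux/of.h>
>
> static bool coresight_pd_is_cpu_cluster(struct device *dev)
> {
> 	struct device_node *pd;
> 	bool bound;
>
> 	pd = of_parse_phandle(dev->of_node, "power-domains", 0);
> 	if (!pd)
> 		return false;
>
> 	/* assumed marker on the cluster PD node identifying its CPUs */
> 	bound = of_property_read_bool(pd, "cpus");
> 	of_node_put(pd);
> 	return bound;
> }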
>
Sure, but for the driver to use that information, we need a clear
explanation for all the questions above. In short: why does this not
work with the existing PSCI domain-idle support?
--
Regards,
Sudeep