[PATCH v2 00/12] coresight: Add CPU cluster funnel/replicator/tmc support

yuanfang zhang yuanfang.zhang at oss.qualcomm.com
Thu Dec 18 18:13:14 PST 2025



On 12/18/2025 7:33 PM, Sudeep Holla wrote:
> On Thu, Dec 18, 2025 at 12:09:40AM -0800, Yuanfang Zhang wrote:
>> This patch series adds support for CoreSight components local to CPU clusters,
>> including funnel, replicator, and TMC, which reside within CPU cluster power
>> domains. These components require special handling due to power domain
>> constraints.
>>
> 
> Could you clarify why PSCI-based power domains associated with clusters in
> domain-idle-states cannot address these requirements, given that PSCI CPU-idle
> OSI mode was originally intended to support them? My understanding of this
> patch series is that OSI mode is unable to do so, which, if accurate, appears
> to be a flaw that should be corrected.

It is due to the particular characteristics of the CPU cluster power domain. Runtime PM
for CPU devices works a little differently: it is mostly used to model the hierarchical
CPU topology (PSCI OSI mode) through the genpd framework, so that the last CPU going
idle in a cluster is handled correctly.
It does not actually send an IPI to wake the CPU device: there are no
.power_on/.power_off callbacks implemented that would get invoked from the
.runtime_resume path. This behavior is aligned with the upstream kernel.

> 
>> Unlike system-level CoreSight devices, these components share the CPU cluster's
>> power domain. When the cluster enters low-power mode (LPM), their registers
>> become inaccessible. Notably, `pm_runtime_get` alone cannot bring the cluster
>> out of LPM, making standard register access unreliable.
>>
> 
> Are these devices the only ones on the system that are uniquely bound to
> cluster-level power domains? If not, what additional devices share this
> dependency so that we can understand how they are managed in comparison?
> 

They are not the only ones: devices like ETM and TRBE share the same power domain
and the same access constraint. Their drivers naturally handle enable/disable on the
specific CPU they belong to (e.g., via hotplug callbacks or existing
smp_call_function paths).

>> To address this, the series introduces:
>> - Identifying cluster-bound devices via a new `qcom,cpu-bound-components`
>>   device tree property.
> 
> Really, no please.
> 

Our objective is to determine which CoreSight components are physically located
within the CPU cluster power domain.
Would it be acceptable to derive this relationship from the existing power-domains
binding instead? For example, if a funnel or replicator node references a
power-domains entry that is associated with a cpumask, the driver could recognize
the shared dependency and automatically apply the appropriate cluster-aware behavior.
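A hypothetical DT sketch of that idea (node name, address, and phandle are invented for illustration; this is not an accepted binding, only a shape for discussion):

```dts
funnel@6041000 {
	compatible = "arm,coresight-dynamic-funnel", "arm,primecell";
	reg = <0x6041000 0x1000>;

	/*
	 * Same genpd as the CPUs of cluster 0.  The driver would
	 * infer the CPU-bound relationship from this reference
	 * instead of a new qcom,cpu-bound-components property.
	 */
	power-domains = <&CLUSTER_PD0>;

	/* in-ports / out-ports omitted for brevity */
};
```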

thanks,
yuanfang.


