[PATCH 1/2] perf: coresight_pmu: Add support for ARM CoreSight PMU driver

Robin Murphy robin.murphy at arm.com
Wed May 11 03:03:38 PDT 2022


On 2022-05-11 03:46, Besar Wicaksono wrote:
[...]
>>> +config ARM_CORESIGHT_PMU
>>> +     tristate "ARM Coresight PMU"
>>> +     depends on ARM64 && ACPI_APMT
>>
>> There shouldn't be any functional dependency on any CPU architecture here.
> 
> The spec is targeted towards ARM-based systems, shouldn't we explicitly limit it to ARM?

I wouldn't say so. The PMU spec does occasionally make reference to the 
Armv8-A and Armv8-M PMU architectures for comparison, but ultimately 
it's specifying an MMIO register interface for a system component. If 
3rd-party system IP vendors adopt it, who knows what kind of systems 
these PMUs might end up in? (And of course a DT binding will inevitably 
come along once the rest of the market catches up with the ACPI-focused 
early adopters.)

In terms of functional dependency plus scope of practical usefulness, I 
think something like:

	depends on ACPI
	depends on ACPI_APMT || COMPILE_TEST

would probably fit the bill until DT support comes along.

[...]
>>> +/*
>>> + * Write to 64-bit register as a pair of 32-bit registers.
>>> + *
>>> + * @val     : 64-bit value to write.
>>> + * @base    : base address of page-0 or page-1 if dual-page ext. is enabled.
>>> + * @offset  : register offset.
>>> + *
>>> + */
>>> +static void write_reg64_lohi(u64 val, void __iomem *base, u32 offset)
>>> +{
>>> +     u32 val_lo, val_hi;
>>> +
>>> +     val_hi = upper_32_bits(val);
>>> +     val_lo = lower_32_bits(val);
>>> +
>>> +     write_reg32(val_lo, base, offset);
>>> +     write_reg32(val_hi, base, offset + 4);
>>> +}
>>
>> #include <linux/io-64-nonatomic-lo-hi.h>
> 
> Thanks for pointing this out. We will replace it with lo_hi_writeq.

The point is more that you can just use writeq() (and readq() where 
atomicity isn't important), and the header will make sure it works wherever.

The significance of not having 64-bit single-copy atomicity should be 
that if the processor issues a 64-bit access, the system may 
*automatically* split it into a pair of 32-bit accesses, e.g. at an 
AXI-to-APB bridge. If making a 64-bit access to a 64-bit register would 
actually fail, that's just broken.
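To illustrate the semantics that header provides: on platforms without native 64-bit MMIO, <linux/io-64-nonatomic-lo-hi.h> defines writeq() as a low-half-then-high-half pair of 32-bit writes. A minimal userspace model of that behaviour (using memcpy in place of real MMIO accessors, purely for illustration) might look like:

```c
#include <stdint.h>
#include <string.h>

/*
 * Userspace model of lo_hi_writeq(): the low 32-bit half is written
 * first, then the high half at offset + 4. In the kernel, including
 * <linux/io-64-nonatomic-lo-hi.h> defines writeq()/readq() in these
 * terms wherever a native 64-bit access isn't available, so a driver
 * can just call writeq() unconditionally.
 */
static void model_lo_hi_writeq(uint64_t val, uint8_t *base, uint32_t offset)
{
	uint32_t lo = (uint32_t)val;		/* lower_32_bits() */
	uint32_t hi = (uint32_t)(val >> 32);	/* upper_32_bits() */

	memcpy(base + offset, &lo, sizeof(lo));
	memcpy(base + offset + 4, &hi, sizeof(hi));
}
```

With that in place the driver-local write_reg64_lohi() helper becomes redundant.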

[...]
>>> +static inline bool is_cycle_cntr_idx(const struct perf_event *event)
>>> +{
>>> +     struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
>>> +     int idx = event->hw.idx;
>>> +
>>> +     return (support_cc(coresight_pmu) && idx == CORESIGHT_PMU_IDX_CCNTR);
>>
>> If we don't support cycle counting, cycles count events should have been
>> rejected in event_init. If they're able to propagate further than that

[apologies for an editing mishap here, this should have continued "then 
something is fundamentally broken."]

> Not sure I understand, do you mean the check for cycle counter support is unnecessary?
> This function is actually called by coresight_pmu_start, which runs after event_init has passed.
> coresight_pmu_start is not aware of whether the cycle counter is supported, so we need to keep checking it.

I mean that the support_cc(coresight_pmu) check should only ever need to 
happen *once* in event_init, so if standard cycles events are not 
supported then they are correctly rejected there and then. After that, 
if we see one in event_add and later, then we can simply infer that we 
*do* have a standard cycle counter and go ahead and allocate it.
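As a sketch of that pattern (all names here -- pmu_caps, CAP_CYCLE_COUNTER, the event code -- are hypothetical, not from the actual driver): the capability check lives only in event_init, and later callbacks simply trust that any event they see already passed it.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical capability bit; illustrative only. */
#define CAP_CYCLE_COUNTER	(1u << 0)

static unsigned int pmu_caps;

static bool is_cycles_event(uint32_t evtype)
{
	return evtype == 0x11;	/* placeholder cycles event code */
}

static int event_init(uint32_t evtype)
{
	/* The only place the capability is ever consulted. */
	if (is_cycles_event(evtype) && !(pmu_caps & CAP_CYCLE_COUNTER))
		return -EOPNOTSUPP;
	return 0;
}

static int event_add(uint32_t evtype)
{
	/*
	 * A cycles event reaching this point implies the dedicated
	 * counter exists, so just allocate it -- no re-check needed.
	 */
	if (is_cycles_event(evtype))
		return 31;	/* dedicated cycle counter index */
	return 0;		/* some free general-purpose counter */
}
```

The payoff is that the hot paths (add/start/stop) carry no capability logic at all.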

>>> +}
>>> +
>>> +bool coresight_pmu_is_cc_event(const struct perf_event *event)
>>> +{
>>> +     struct coresight_pmu *coresight_pmu = to_coresight_pmu(event->pmu);
>>> +     u32 evtype = coresight_pmu->impl.ops->event_type(event);
>>> +
>>> +     return (support_cc(coresight_pmu) &&
>>
>> Ditto.
> 
> This function is called by event_init to validate the event and find available counters.

Right, but it also ends up getting called from other places like 
event_add as well. Like I say, if we're still checking whether an event 
is supported or not by that point, we're doing something wrong.

[...]
>>> +/**
>>> + * This is the default event number for cycle count, if supported, since the
>>> + * ARM Coresight PMU specification does not define a standard event code
>>> + * for cycle count.
>>> + */
>>> +#define CORESIGHT_PMU_EVT_CYCLES_DEFAULT (0x1ULL << 31)
>>
>> And what do we do when an implementation defines 0x80000000 as one of
>> its own event specifiers? The standard cycle count is independent of any
>> other events, so it needs to be encoded in a manner which is distinct
>> from *any* potentially-valid PMEVTYPER value.
> 
> We were thinking that in such case, the implementor would provide coresight_pmu_impl_ops.
> To avoid it, I guess we can use config[32] for the default cycle count event id.
> The filter value will need to be moved to config1[31:0].
> Does it sound reasonable ?

Sure, you can lay out the config fields however you fancy, but since the 
architecture leaves the standard cycles event independent from the 
32-bit IMP-DEF PMEVTYPER specifier, logically we need at least 33 bits 
in some form or other to encode all possible event types in our 
perf_event config.
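One possible 33-bit layout along the lines Besar suggests (macro and function names are hypothetical): config[31:0] carries the full IMP-DEF PMEVTYPER specifier, and config[32] is a separate flag selecting the standard cycles event, so no IMP-DEF event code -- including 0x80000000 -- can collide with it.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative perf_event config encoding:
 *   config[31:0] -- 32-bit IMP-DEF PMEVTYPER event specifier
 *   config[32]   -- standard cycles event flag (the 33rd bit)
 */
#define PMU_CONFIG_CYCLES_BIT	(1ULL << 32)

static bool config_is_cycles(uint64_t config)
{
	return (config & PMU_CONFIG_CYCLES_BIT) != 0;
}

static uint32_t config_evtype(uint64_t config)
{
	return (uint32_t)config;	/* PMEVTYPER value, config[31:0] */
}
```

Under this scheme 0x80000000 decodes as an ordinary IMP-DEF event, distinct from the cycles encoding, which was the collision the review raised.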

Thanks,
Robin.



More information about the linux-arm-kernel mailing list