[PATCH V11 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
Suzuki K Poulose
suzuki.poulose at arm.com
Thu Jun 8 03:13:24 PDT 2023
On 06/06/2023 11:34, Anshuman Khandual wrote:
>
>
> On 6/5/23 17:35, Mark Rutland wrote:
>> On Wed, May 31, 2023 at 09:34:23AM +0530, Anshuman Khandual wrote:
>>> This enables support for branch stack sampling events in the ARMV8 PMU,
>>> checking has_branch_stack() on the event inside 'struct arm_pmu' callbacks,
>>> although these branch stack helpers armv8pmu_branch_XXXXX() are just dummy
>>> functions for now. While here, this also defines arm_pmu's sched_task()
>>> callback as armv8pmu_sched_task(), which resets the branch record buffer on
>>> a sched_in.
>>
>> This generally looks good, but I have a few comments below.
>>
>> [...]
>>
>>> +static inline bool armv8pmu_branch_valid(struct perf_event *event)
>>> +{
>>> +	WARN_ON_ONCE(!has_branch_stack(event));
>>> +	return false;
>>> +}
>>
>> IIUC this is for validating the attr, so could we please name this
>> armv8pmu_branch_attr_valid() ?
>
> Sure, will change the name and update the call sites.
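FWIW, the rename is mechanical - something like the below (sketch only, the
stub body stays as it is until BRBE provides a real implementation):

	static inline bool armv8pmu_branch_attr_valid(struct perf_event *event)
	{
		WARN_ON_ONCE(!has_branch_stack(event));
		return false;
	}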
>
>>
>> [...]
>>
>>> +static int branch_records_alloc(struct arm_pmu *armpmu)
>>> +{
>>> +	struct pmu_hw_events *events;
>>> +	int cpu;
>>> +
>>> +	for_each_possible_cpu(cpu) {
Shouldn't this be supported_cpus ? i.e.

	for_each_cpu(cpu, &armpmu->supported_cpus) {
>>> +		events = per_cpu_ptr(armpmu->hw_events, cpu);
>>> +		events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
>>> +		if (!events->branches)
>>> +			return -ENOMEM;
Don't we need to free the branches already allocated for the other CPUs here ?
>>> +	}
Maybe something like:

	int ret = 0;

	for_each_cpu(cpu, &armpmu->supported_cpus) {
		events = per_cpu_ptr(armpmu->hw_events, cpu);
		events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
		if (!events->branches) {
			ret = -ENOMEM;
			break;
		}
	}

	if (!ret)
		return 0;

	for_each_cpu(cpu, &armpmu->supported_cpus) {
		events = per_cpu_ptr(armpmu->hw_events, cpu);
		if (!events->branches)
			break;
		kfree(events->branches);
	}
	return ret;
>>> +	return 0;
>>
>> This leaks memory if any allocation fails, and the next patch replaces this
>> code entirely.
>
> Okay.
>
>>
>> Please add this once in a working state. Either use the percpu allocation
>> trick in the next patch from the start, or have this kzalloc() with a
>> corresponding kfree() in an error path.
>
> I will change branch_records_alloc() as suggested in the next patch's thread
> and fold those changes here in this patch.
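For the record, the percpu approach could look roughly like the below (untested
sketch, not necessarily what the next patch does - a single allocation replaces
the per-cpu kzalloc()s, so there is no partial-failure state to unwind inside
the loop):

	static int branch_records_alloc(struct arm_pmu *armpmu)
	{
		struct branch_records __percpu *records;
		int cpu;

		records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
		if (!records)
			return -ENOMEM;

		/*
		 * Each CPU's hw_events simply points at its slice of the single
		 * percpu allocation (could equally be restricted to
		 * armpmu->supported_cpus).
		 */
		for_each_possible_cpu(cpu) {
			struct pmu_hw_events *events = per_cpu_ptr(armpmu->hw_events, cpu);

			events->branches = per_cpu_ptr(records, cpu);
		}
		return 0;
	}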
>
>>
>>> }
>>>
>>> static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>> @@ -1145,12 +1162,24 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>> 	};
>>> 	int ret;
>>>
>>> +	ret = armv8pmu_private_alloc(cpu_pmu);
>>> +	if (ret)
>>> +		return ret;
>>> +
>>> 	ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>>> 				    __armv8pmu_probe_pmu,
>>> 				    &probe, 1);
>>> 	if (ret)
>>> 		return ret;
>>>
>>> +	if (arm_pmu_branch_stack_supported(cpu_pmu)) {
>>> +		ret = branch_records_alloc(cpu_pmu);
>>> +		if (ret)
>>> +			return ret;
>>> +	} else {
>>> +		armv8pmu_private_free(cpu_pmu);
>>> +	}
>>
>> I see from the next patch that "private" is four ints, so please just add that
>> to struct arm_pmu under an ifdef CONFIG_ARM64_BRBE. That'll simplify this, and
>> if we end up needing more space in future we can consider factoring it out.
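Just to illustrate what that would look like (sketch; the field names below are
made up, since the actual contents of 'private' are not visible in this patch):

	struct arm_pmu {
		...
	#ifdef CONFIG_ARM64_BRBE
		/* BRBE attributes probed at boot */
		int	brbe_version;
		int	brbe_cc;
		int	brbe_nr;
		int	brbe_format;
	#endif
		...
	};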
>
> struct arm_pmu {
> 	........................................
> 	/* Implementation specific attributes */
> 	void *private;
> }
>
> The private pointer here creates an abstraction for a given PMU implementation
> to hide attribute details from the core arm_pmu layer. Although adding an
> ifdef CONFIG_ARM64_BRBE as mentioned above would solve the problem, it breaks
> that abstraction. Currently the arm_pmu layer knows about 'branch records' but
> not about BRBE in particular, which the driver adds later on. I suggest we do
> not break that abstraction.
>
> Instead, a global 'static struct brbe_hw_attr' in drivers/perf/arm_brbe.c can
> be assigned to arm_pmu->private during armv8pmu_branch_probe(), which also
> solves the allocation/free problem. Similar helpers armv8pmu_task_ctx_alloc()
> and armv8pmu_task_ctx_free() could be defined to manage the task context
> cache, i.e. arm_pmu->pmu.task_ctx_cache, independently.
>
> armv8pmu_task_ctx_alloc() can then be called once the PMU probe has confirmed
> arm_pmu->has_branch_stack.
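Something like the below, I presume (sketch only - the signature of
armv8pmu_branch_probe() and the contents of brbe_hw_attr are assumptions here):

	/* drivers/perf/arm_brbe.c */
	static struct brbe_hw_attr brbe_hw_attr;

	void armv8pmu_branch_probe(struct arm_pmu *armpmu)
	{
		/*
		 * Probe BRBE, fill in the static brbe_hw_attr, and publish it
		 * via the implementation private pointer. With no runtime
		 * allocation there is nothing to free on any error path.
		 */
		armpmu->private = &brbe_hw_attr;
	}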
>
>>
>>> +
>>> 	return probe.present ? 0 : -ENODEV;
>>> }
>>
>> It also seems odd to check probe.present *after* checking
>> arm_pmu_branch_stack_supported().
>
> I will reorganize as suggested below.
>
>>
>> With the allocation removed I think this can be written more clearly as:
>>
>> | static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>> | {
>> | 	struct armv8pmu_probe_info probe = {
>> | 		.pmu = cpu_pmu,
>> | 		.present = false,
>> | 	};
>> | 	int ret;
>> |
>> | 	ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>> | 				    __armv8pmu_probe_pmu,
>> | 				    &probe, 1);
>> | 	if (ret)
>> | 		return ret;
>> |
>> | 	if (!probe.present)
>> | 		return -ENODEV;
>> |
>> | 	if (arm_pmu_branch_stack_supported(cpu_pmu))
>> | 		ret = branch_records_alloc(cpu_pmu);
>> |
>> | 	return ret;
>> | }
Could we not simplify this as below and keep the abstraction, since we
already have it ?
>> | static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>> | {
>> | 	struct armv8pmu_probe_info probe = {
>> | 		.pmu = cpu_pmu,
>> | 		.present = false,
>> | 	};
>> | 	int ret;
>> |
>> | 	ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>> | 				    __armv8pmu_probe_pmu,
>> | 				    &probe, 1);
>> | 	if (ret)
>> | 		return ret;
>> |
>> | 	if (!probe.present)
>> | 		return -ENODEV;
>> |
>> | 	if (!arm_pmu_branch_stack_supported(cpu_pmu))
>> | 		return 0;
>> |
>> | 	ret = armv8pmu_private_alloc(cpu_pmu);
>> | 	if (ret)
>> | 		return ret;
>> |
>> | 	ret = branch_records_alloc(cpu_pmu);
>> | 	if (ret)
>> | 		armv8pmu_private_free(cpu_pmu);
>> |
>> | 	return ret;
>> | }
Suzuki