[RFC PATCH 1/2] perf: arm_spe: Fix consistency of PMSCR register bit CX
Leo Yan
leo.yan at linaro.org
Tue Feb 8 05:00:47 PST 2022
Hi German,
On Mon, Feb 07, 2022 at 12:06:14PM +0000, German Gomez wrote:
[...]
> > I reviewed the code and also traced the backtrace for the function
> > arm_spe_pmu_start(), and I can confirm that the perf session always
> > executes the flow below:
> >
> >   evlist__enable()
> >     __evlist__enable()
> >       evlist__for_each_cpu() {  -> call affinity__set()
> >         evsel__enable_cpu()
> >       }
> >
> > We can see the macro evlist__for_each_cpu() expands to invoke
> > evlist__cpu_begin() and affinity__set(); affinity__set() sets the CPU
> > affinity to the target CPU, so the perf process first migrates to the
> > target CPU and then enables the event there. This means perf does not
> > send a remote IPI; it runs directly on the target CPU, so the dd
> > program's capabilities cannot interfere with the perf session.
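(Side note for other readers: below is a simplified sketch of this
pattern, paraphrased from tools/perf/util/evlist.c; the upstream code
differs in detail, so treat it as illustration only:)

  static void __evlist__enable(struct evlist *evlist, char *evsel_name)
  {
          struct affinity affinity;
          int i, cpu;

          if (affinity__setup(&affinity) < 0)
                  return;

          evlist__for_each_cpu(evlist, i, cpu) {
                  /* Pin the perf process to the target CPU first ... */
                  affinity__set(&affinity, cpu);
                  /* ... so the per-CPU enable below runs locally on
                   * that CPU, with no remote IPI into whatever task
                   * currently occupies it. */
                  ... /* evsel__enable_cpu() for each evsel */
          }
          affinity__cleanup(&affinity);
  }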
>
> Thank you for looking at this,
>
> I re-tested on the N1SDP (previously I was using a Graviton2 instance).
> This time I had to adjust the command slightly with "-m,2" to reproduce
> it consistently:
>
> $ taskset --cpu-list 0 sudo dd if=/dev/random of=/dev/null &
> $ perf record -e arm_spe_0// -C0 -m,2 -- sleep 1
> $ perf report -D | grep CONTEXT | head
> . 0000000e: 65 b5 6e 00 00 CONTEXT 0x6eb5 el2
> . 0000004e: 65 b5 6e 00 00 CONTEXT 0x6eb5 el2
> . 0000008e: 65 b5 6e 00 00 CONTEXT 0x6eb5 el2
> [...]
Indeed! I can reproduce the issue now, and I can capture a backtrace
for arm_spe_pmu_start() with the commands below:
# cd /home/leoy/linux/tools/perf
# ./perf probe --add "arm_spe_pmu_start" -s /home/leoy/linux/ -k /home/leoy/linux/vmlinux
# echo 1 > /sys/kernel/debug/tracing/events/probe/arm_spe_pmu_start/enable
# echo stacktrace > /sys/kernel/debug/tracing/events/probe/arm_spe_pmu_start/trigger
... run your commands as a non-root user ...
# cat /sys/kernel/debug/tracing/trace
dd-7697 [000] d.h2. 506.068700: arm_spe_pmu_start: (arm_spe_pmu_start+0x8/0xe0)
dd-7697 [000] d.h3. 506.068701: <stack trace>
=> kprobe_dispatcher
=> kprobe_breakpoint_handler
=> call_break_hook
=> brk_handler
=> do_debug_exception
=> el1_dbg
=> el1h_64_sync_handler
=> el1h_64_sync
=> arm_spe_pmu_start
=> event_sched_in.isra.0
=> merge_sched_in
=> visit_groups_merge.constprop.0
=> ctx_sched_in
=> perf_event_sched_in
=> ctx_resched
=> __perf_event_enable
=> event_function
=> remote_function
=> flush_smp_call_function_queue
=> generic_smp_call_function_single_interrupt
=> ipi_handler
=> handle_percpu_devid_irq
=> generic_handle_domain_irq
=> gic_handle_irq
=> call_on_irq_stack
=> do_interrupt_handler
=> el1_interrupt
=> el1h_64_irq_handler
=> el1h_64_irq
=> _raw_spin_unlock_irqrestore
=> urandom_read_nowarn.isra.0
=> random_read
=> vfs_read
=> ksys_read
=> __arm64_sys_read
=> invoke_syscall
=> el0_svc_common.constprop.0
=> do_el0_svc
=> el0_svc
=> el0t_64_sync_handler
=> el0t_64_sync
The backtrace clearly shows that the function arm_spe_pmu_start() is
invoked in the 'dd' process (dd-7697); the flow is:
- the perf program sends an IPI to CPU0;
- the 'dd' process running on CPU0 is interrupted to handle the IPI;
- the 'dd' process has root capabilities, so it enables context
  tracing for the non-root perf session.
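The root cause is that perfmon_capable() tests the credentials of the
*current* task, which in the IPI context is 'dd' (running as root via
sudo) rather than the non-root perf tool. For reference, the helper is
simply (quoting include/linux/capability.h as of v5.16):

  static inline bool perfmon_capable(void)
  {
          /* capable() checks current's creds; in the IPI context that
           * is the interrupted root 'dd' task, not the perf tool. */
          return capable(CAP_PERFMON) || capable(CAP_SYS_ADMIN);
  }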
> >> One way to fix this is by caching the value of the CX bit during the
> >> initialization of the PMU event, so that it remains consistent for the
> >> duration of the session.
> >>
> >> [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/perf/arm_spe_pmu.c?h=v5.16#n275
> >> [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/perf/arm_spe_pmu.c?h=v5.16#n713
> >> [3]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/perf/arm_spe_pmu.c?h=v5.16#n751
> >>
> >> Signed-off-by: German Gomez <german.gomez at arm.com>
> >> ---
> >> drivers/perf/arm_spe_pmu.c | 6 ++++--
> >> 1 file changed, 4 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
> >> index d44bcc29d..8515bf85c 100644
> >> --- a/drivers/perf/arm_spe_pmu.c
> >> +++ b/drivers/perf/arm_spe_pmu.c
> >> @@ -57,6 +57,7 @@ struct arm_spe_pmu {
> >> u16 pmsver;
> >> u16 min_period;
> >> u16 counter_sz;
> >> + bool pmscr_cx;
So the patch makes sense to me. Just a minor comment: here we could
define a u64 field to record the whole pmscr value rather than a bool:
  struct arm_spe_pmu {
          ...
          u64 pmscr;
  };
> >>
> >> #define SPE_PMU_FEAT_FILT_EVT (1UL << 0)
> >> #define SPE_PMU_FEAT_FILT_TYP (1UL << 1)
> >> @@ -260,6 +261,7 @@ static const struct attribute_group *arm_spe_pmu_attr_groups[] = {
> >> static u64 arm_spe_event_to_pmscr(struct perf_event *event)
> >> {
> >> struct perf_event_attr *attr = &event->attr;
> >> + struct arm_spe_pmu *spe_pmu = to_spe_pmu(event->pmu);
> >> u64 reg = 0;
> >>
> >> reg |= ATTR_CFG_GET_FLD(attr, ts_enable) << SYS_PMSCR_EL1_TS_SHIFT;
> >> @@ -272,7 +274,7 @@ static u64 arm_spe_event_to_pmscr(struct perf_event *event)
> >> if (!attr->exclude_kernel)
> >> reg |= BIT(SYS_PMSCR_EL1_E1SPE_SHIFT);
> >>
> >> - if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && perfmon_capable())
> >> + if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && spe_pmu->pmscr_cx)
> >> reg |= BIT(SYS_PMSCR_EL1_CX_SHIFT);
> >>
> >> return reg;
> >> @@ -709,10 +711,10 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
> >> !(spe_pmu->features & SPE_PMU_FEAT_FILT_LAT))
> >> return -EOPNOTSUPP;
> >>
> >> + spe_pmu->pmscr_cx = perfmon_capable();
> >> reg = arm_spe_event_to_pmscr(event);
Thus here we can change this to:

  spe_pmu->pmscr = arm_spe_event_to_pmscr(event);

Then in the function arm_spe_pmu_start(), we can skip calling
arm_spe_event_to_pmscr() and directly program the PMSCR register:
  static void arm_spe_pmu_start(struct perf_event *event, int flags)
  {
          ...
          isb();
          write_sysreg_s(spe_pmu->pmscr, SYS_PMSCR_EL1);
  }
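To spell out the whole idea, a rough and untested pseudo-diff on top of
your patch (dropping the bool and caching the full register value):

  @@ struct arm_spe_pmu @@
  -	bool	pmscr_cx;
  +	u64	pmscr;

  @@ arm_spe_pmu_event_init() @@
  -	spe_pmu->pmscr_cx = perfmon_capable();
  -	reg = arm_spe_event_to_pmscr(event);
  +	/* arm_spe_event_to_pmscr() now only runs here, in the perf
  +	 * process's own context, so it can keep using perfmon_capable()
  +	 * directly for the CX bit. */
  +	reg = spe_pmu->pmscr = arm_spe_event_to_pmscr(event);

  @@ arm_spe_pmu_start() @@
  -	reg = arm_spe_event_to_pmscr(event);
   	isb();
  -	write_sysreg_s(reg, SYS_PMSCR_EL1);
  +	write_sysreg_s(spe_pmu->pmscr, SYS_PMSCR_EL1);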
Thanks,
Leo
> >> if (!perfmon_capable() &&
> >> (reg & (BIT(SYS_PMSCR_EL1_PA_SHIFT) |
> >> - BIT(SYS_PMSCR_EL1_CX_SHIFT) |
> >> BIT(SYS_PMSCR_EL1_PCT_SHIFT))))
> >> return -EACCES;
> >>
> >> --
> >> 2.25.1
> >>