[PATCH v4 2/9] arm64: perf: Enable pmu counter direct access for perf event on armv8
Rob Herring
robh at kernel.org
Wed Jan 6 19:17:50 EST 2021
On Wed, Dec 02, 2020 at 07:57:47AM -0700, Rob Herring wrote:
> On Fri, Nov 20, 2020 at 02:03:45PM -0600, Rob Herring wrote:
> > On Thu, Nov 19, 2020 at 07:15:15PM +0000, Will Deacon wrote:
> > > On Fri, Nov 13, 2020 at 06:06:33PM +0000, Mark Rutland wrote:
> > > > On Thu, Oct 01, 2020 at 09:01:09AM -0500, Rob Herring wrote:
> > > > > +static void armv8pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
> > > > > +{
> > > > > +	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR))
> > > > > +		return;
> > > > > +
> > > > > +	if (atomic_dec_and_test(&mm->context.pmu_direct_access))
> > > > > +		on_each_cpu_mask(mm_cpumask(mm), refresh_pmuserenr, NULL, 1);
> > > > > +}
Bump on this again... :)
> > > >
> > > > I didn't think we kept our mm_cpumask() up-to-date in all cases on
> > > > arm64, so I'm not sure we can use it like this.
> > > >
> > > > Will, can you confirm either way?
> > >
> > > We don't update mm_cpumask() as the cost of the atomic showed up in some
> > > benchmarks I did years ago and we've never had any need for the thing anyway
> > > because our TLB invalidation is one or all.
> >
> > That's good because we're also passing NULL instead of mm, which would
> > crash. So it must be more than just being out of date; it must always be 0.
> > It looks like event_mapped on x86 uses mm_cpumask(mm), which I guess was
> > dropped when copying this code because it didn't work... For reference, the
> > x86 version of this originated in commit 7911d3f7af14a6.
> >
> > I'm not clear on why we need to update pmuserenr_el0 here anyway. To
> > get here, userspace has to mmap the event and then unmap it. If we did
> > nothing, then counter accesses would not fault until the next context
> > switch.
Okay, I've come up with a test case where I can trigger this. It's a bit
convoluted IMO: the thread doing the mmap is a different thread from the
one reading the counter. It seems like it would be better if we just
disabled user access when we're not monitoring the calling thread; we
could always loosen that restriction later. x86, OTOH, was wide open with
access globally enabled, and this hunk of code was part of restricting it
some.
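To give an idea of the shape of it, something like the below (a trimmed,
untested sketch rather than the actual test; error handling and the mmap
page's self-monitoring handshake are omitted, and the direct pmccntr_el0
read is just illustrative):

#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/perf_event.h>

static void *reader(void *arg)
{
	unsigned long long cyc;

	/* Read the cycle counter directly; faults once PMUSERENR is cleared */
	for (;;) {
		asm volatile("mrs %0, pmccntr_el0" : "=r" (cyc));
		printf("%llu\n", cyc);
		usleep(100000);
	}
	return NULL;
}

int main(void)
{
	struct perf_event_attr attr = {
		.size		= sizeof(attr),
		.type		= PERF_TYPE_HARDWARE,
		.config		= PERF_COUNT_HW_CPU_CYCLES,
		.exclude_kernel	= 1,
	};
	pthread_t t;
	void *page;
	int fd;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

	/* The mmap() in this thread is what enables direct access for the mm */
	page = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);

	pthread_create(&t, NULL, reader, NULL);
	sleep(1);

	/*
	 * Unmap from this thread. Without the IPI in
	 * armv8pmu_event_unmapped(), the reader thread on another CPU keeps
	 * direct access until its next context switch.
	 */
	munmap(page, sysconf(_SC_PAGESIZE));
	sleep(1);

	return 0;
}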
> >
> > If you all have any ideas, I'm all ears. I'm not a scheduler or perf
> > hacker. ;)
>
> Mark, Will, any thoughts on this?
Any reason this would not work:

static void refresh_pmuserenr(void *mm)
{
	if (mm == current->active_mm)
		perf_switch_user_access(mm);
}
The downside is we'd be doing an IPI on *all* cores for the PMU, not just
the ones in mm_cpumask() (if that were kept accurate). But this isn't a
fast path.
Rob