[PATCH v6 02/10] arm64: perf: Enable PMU counter direct access for perf event
Rob Herring
robh at kernel.org
Tue Mar 30 22:08:11 BST 2021
On Tue, Mar 30, 2021 at 12:09 PM Rob Herring <robh at kernel.org> wrote:
>
> On Tue, Mar 30, 2021 at 10:31 AM Will Deacon <will at kernel.org> wrote:
> >
> > On Wed, Mar 10, 2021 at 05:08:29PM -0700, Rob Herring wrote:
> > > From: Raphael Gault <raphael.gault at arm.com>
> > >
> > > Keep track of events opened with direct access to the hardware counters
> > > and modify permissions while they are open.
> > >
> > > The strategy used here is the same one x86 uses: every time an event
> > > is mapped, the permissions are set if required. The atomic field added
> > > in the mm_context helps keep track of the different events opened and
> > > de-activates the permissions when all are unmapped.
> > > We also need to update the permissions in the context switch code so
> > > that tasks keep the right permissions.
> > >
> > > In order to enable 64-bit counters for userspace when available, a new
> > > config1 bit is added for userspace to indicate it wants userspace counter
> > > access. This bit allows the kernel to decide if chaining should be
> > > disabled, as chaining and userspace access are incompatible.
> > > The modes for config1 are as follows:
> > >
> > > config1 = 0 or 2 : user access enabled and always 32-bit
> > > config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
> > > config1 = 3 : user access enabled and counter size matches underlying counter.
[...]
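For reference, roughly what the userspace side ends up doing with this
series (untested sketch, error handling mostly omitted, names are just for
illustration): config1 follows the encoding above, and the rest is the
usual perf_event_open() plus an mmap() of the first page, which is what
triggers the permission switch described above.

#include <linux/perf_event.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * Open a CPU cycles event asking for user access with the counter size
 * matching the underlying counter (config1 = 3). config1 = 2 would ask
 * for user access with the 32-bit view instead.
 */
static struct perf_event_mmap_page *open_cycles_user(int *fdp)
{
	struct perf_event_attr attr;
	void *page;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.config1 = 0x3;	/* bit 0: 64-bit, bit 1: user access */
	attr.exclude_kernel = 1;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return NULL;

	/* Mapping the event is what enables EL0 access to the counter. */
	page = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
	if (page == MAP_FAILED) {
		close(fd);
		return NULL;
	}

	*fdp = fd;
	return page;
}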
> > > @@ -980,9 +1032,23 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
> > >  				       &armv8_pmuv3_perf_cache_map,
> > >  				       ARMV8_PMU_EVTYPE_EVENT);
> > >
> > > -	if (armv8pmu_event_is_64bit(event))
> > > +	if (armv8pmu_event_want_user_access(event) || !armv8pmu_event_is_64bit(event)) {
> > > +		event->hw.flags |= ARMPMU_EL0_RD_CNTR;
> >
> > Why do you set this for all 32-bit events?
>
> It goes back to the config1 bits as explained in the commit msg. We
> can always support user access for 32-bit counters, but for 64-bit
> counters the user has to request both user access and 64-bit counters.
> We could require an explicit user access request for 32-bit access, but
> I thought it was better not to require userspace to do something
> Arm-specific on open.
>
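For context, the read side is just the standard self-monitoring sequence
documented in include/uapi/linux/perf_event.h, which masks the value to
pmc_width, so the same loop works whether the counter is 32-bit or 64-bit.
Rough, untested sketch below; read_pmc() is only a placeholder here for
the actual counter read (an mrs of PMCCNTR_EL0 or the right
PMEVCNTR<n>_EL0, selected by idx - 1).

#include <linux/perf_event.h>
#include <stdint.h>

/* Placeholder: mrs of PMCCNTR_EL0 or PMEVCNTR<n>_EL0 for counter 'n'. */
extern uint64_t read_pmc(uint32_t n);

/* 'pc' is the mmap'ed first page of the event fd. */
static uint64_t read_count(struct perf_event_mmap_page *pc)
{
	uint32_t seq, idx;
	uint64_t count, pmc;

	do {
		seq = pc->lock;
		asm volatile("" : : : "memory");	/* compiler barrier */

		idx = pc->index;
		count = pc->offset;
		if (pc->cap_user_rdpmc && idx) {
			pmc = read_pmc(idx - 1);
			/* Mask to the counter width the kernel reports. */
			pmc <<= 64 - pc->pmc_width;
			pmc >>= 64 - pc->pmc_width;
			count += pmc;
		}

		asm volatile("" : : : "memory");
	} while (pc->lock != seq);

	return count;
}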
> > The logic here feels like it
> > could do with a bit of untangling.
>
> Yes, I don't love it, but couldn't come up with anything better. It is
> complicated by the fact that the flags have to be set before we assign
> the counter, and we can't set/change them when we assign the counter. It
> would take a lot of refactoring of the armpmu code to fix that.
How's this instead?:
	if (armv8pmu_event_want_user_access(event) || !armv8pmu_event_is_64bit(event))
		event->hw.flags |= ARMPMU_EL0_RD_CNTR;

	/*
	 * At this point, the counter is not assigned. If a 64-bit counter is
	 * requested, we must make sure the h/w has 64-bit counters if we set
	 * the event size to 64-bit because chaining is not supported with
	 * userspace access. This may still fail later on if the CPU cycle
	 * counter is in use.
	 */
	if (armv8pmu_event_is_64bit(event) &&
	    (!armv8pmu_event_want_user_access(event) ||
	     armv8pmu_has_long_event(cpu_pmu) ||
	     (hw_event_id == ARMV8_PMUV3_PERFCTR_CPU_CYCLES)))
		event->hw.flags |= ARMPMU_EVT_64BIT;
> > > +		/*
> > > +		 * At this point, the counter is not assigned. If a 64-bit
> > > +		 * counter is requested, we must make sure the h/w has 64-bit
> > > +		 * counters if we set the event size to 64-bit because chaining
> > > +		 * is not supported with userspace access. This may still fail
> > > +		 * later on if the CPU cycle counter is in use.
> > > +		 */
> > > +		if (armv8pmu_event_is_64bit(event) &&
> > > +		    (armv8pmu_has_long_event(armpmu) ||
> > > +		     hw_event_id == ARMV8_PMUV3_PERFCTR_CPU_CYCLES))
> > > +			event->hw.flags |= ARMPMU_EVT_64BIT;
> > > +	} else if (armv8pmu_event_is_64bit(event))
> > >  		event->hw.flags |= ARMPMU_EVT_64BIT;