[PATCH v1 0/4] Prefer sysfs/JSON events also when no PMU is provided

Atish Kumar Patra atishp at rivosinc.com
Fri Nov 8 14:06:22 PST 2024


On Fri, Nov 8, 2024 at 11:00 AM Ian Rogers <irogers at google.com> wrote:
>
> On Fri, Nov 8, 2024 at 10:38 AM Atish Kumar Patra <atishp at rivosinc.com> wrote:
> >
> > On Fri, Nov 8, 2024 at 4:16 AM James Clark <james.clark at linaro.org> wrote:
> > >
> > >
> > >
> > > On 07/11/2024 18:51, Ian Rogers wrote:
> > > > On Sat, Oct 26, 2024 at 5:18 AM Ian Rogers <irogers at google.com> wrote:
> > > >>
> > > >> At the RISC-V summit the topic of avoiding event data being in the
> > > >> RISC-V PMU kernel driver came up. There is a preference for sysfs/JSON
> > > >> events taking priority when no PMU is provided, so that legacy
> > > >> events may be supported via json. Originally Mark Rutland also
> > > >> expressed at LPC 2023 that doing this would resolve bugs on ARM Apple
> > > >> M? processors, but James Clark more recently tested this and believes
> > > >> the driver issues there may not have existed or have been resolved. In
> > > >> any case, it is inconsistent that event names given with an explicit
> > > >> PMU avoid legacy encodings, but when wildcarding PMUs (i.e. an event
> > > >> name without a PMU) the legacy encodings take priority.
> > > >>
> > > >> The patch doing this work was reverted in a v6.10 release candidate
> > > >> as, even though the patch was posted for weeks and had been on
> > > >> linux-next for weeks without issue, Linus was in the habit of using
> > > >> explicit legacy events with unsupported precision options on his
> > > >> Neoverse-N1. This machine has SLC PMU events for bus and CPU cycles
> > > >> where ARM decided to call the events bus_cycles and cycles, the
> > > >> latter also being a legacy event name. ARM haven't renamed the cycles
> > > >> event to a more consistent cpu_cycles, which would have avoided the
> > > >> problem. With these
> > > >> changes the problematic event will now be skipped, a large warning
> > > >> produced, and perf record will continue for the other PMU events. This
> > > >> solution was proposed by Arnaldo.
> > > >>
> > > >> Two minor changes have been added to help with the error message and
> > > >> to work around issues occurring with "perf stat metrics (shadow stat)
> > > >> test".
> > > >>
> > > >> The patches have only been tested on my x86 non-hybrid laptop.
> > > >
> > > > Hi Atish and James,
> > > >
> > > > Could I get your tags for this series?
> > > >
> >
> > Hi Ian,
> > Thanks for your patches. It definitely helps to have a clean-slate
> > implementation for the perf tool. However, I have some open questions
> > about other use cases that we discussed during the RVI Summit.
>
> Thanks Atish, could I get your acked/reviewed/tested tags?
>

Sure. I will finish the testing and send those.

> Ian
>
> > > > The patches were originally motivated by wanting to make the behavior
> > > > of events parsed like "cycles" match that of "cpu/cycles/"; the PMU is
> > > > wildcarded to "cpu" in the first case. The behaviors diverged because,
> > > > for ARM, we switched from preferring legacy (type = PERF_TYPE_HARDWARE,
> > > > config = PERF_COUNT_HW_CPU_CYCLES) to sysfs/json (type=<core PMU's
> > > > type>, config=<encoding from event>) when a PMU name was given. This
> > > > aligns with RISC-V wanting to use json encodings to avoid complexity
> > > > in the PMU driver.
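
To make sure we are talking about the same thing, my understanding of the
two encodings is roughly the sketch below (illustrative only; the dynamic
type would come from /sys/bus/event_source/devices/cpu/type and the config
from the sysfs/json event on the actual machine, the numbers here are
made up):

  #include <linux/perf_event.h>

  /* Legacy encoding: what a wildcarded "cycles" currently resolves to. */
  struct perf_event_attr legacy = {
          .size   = sizeof(struct perf_event_attr),
          .type   = PERF_TYPE_HARDWARE,
          .config = PERF_COUNT_HW_CPU_CYCLES,
  };

  /* sysfs/json encoding: what "cpu/cycles/" resolves to today, and what a
   * wildcarded "cycles" would prefer with this series applied. */
  struct perf_event_attr sysfs_json = {
          .size   = sizeof(struct perf_event_attr),
          .type   = 4,    /* made-up: value read from .../devices/cpu/type */
          .config = 0x3c, /* made-up: encoding from the sysfs/json event */
  };
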
> > > >
> > >
> > > I couldn't find the thread, but I remember fairly recently it was
> > > mentioned that RISC-V would be supporting the legacy events after all,
> > > maybe it was a comment from Atish? I'm not sure if that changes the
> > > requirements for this or not?
> > >
> > > I still can't really imagine how tooling would work if every tool has to
> > > maintain the mappings of basic events like instructions and branches.
> > > For example all the perf_event_open tests in ltp use the legacy events.
> > >
> >
> > No, it has not changed. While this series helps to avoid clunky
> > vendor-specific encodings in the driver for the perf tool, I am still
> > unsure how other applications that pass legacy events directly
> > through perf_event_open or perf_evlist__open will work.
> >
> > I have only anecdotal data about folks relying on perf legacy events
> > directly to profile their applications. All of them mostly use
> > cycle/instruction events, though. Are there any users who use other
> > legacy events directly, without the perf tool?
> >

+ Michale from Tenstorrent, who was suggesting that they use direct perf
calls in their profiling application.

@Michale: Do you have more details on the direct usage of perf legacy
events to profile your application?
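
To be concrete, by "direct usage" I mean something like the minimal
sketch below (not anyone's actual code, just the shape of it): a plain
perf_event_open() with the legacy type/config ids and no json or sysfs
lookup anywhere.

  #include <linux/perf_event.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <string.h>
  #include <stdio.h>

  int main(void)
  {
          struct perf_event_attr attr;
          long long count;
          long fd;

          memset(&attr, 0, sizeof(attr));
          attr.size = sizeof(attr);
          /* Legacy generic event, no PMU or json lookup involved. */
          attr.type = PERF_TYPE_HARDWARE;
          attr.config = PERF_COUNT_HW_INSTRUCTIONS;

          /* pid = 0 (self), cpu = -1 (any), no group, no flags. */
          fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
          if (fd < 0) {
                  perror("perf_event_open");
                  return 1;
          }

          /* ... run the workload being profiled ... */

          if (read(fd, &count, sizeof(count)) == sizeof(count))
                  printf("instructions: %lld\n", count);
          close(fd);
          return 0;
  }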

> > If not, we may keep only the cycle/instruction mapping in the driver
> > and rely on json for everything else.
> > The other use case is virtualization. I have been playing with these
> > patches to find a clean solution to enable all the use cases. If you
> > have any other ideas, please let me know.
> >

@Ian

Any thoughts on this? Let me explain the hypervisor use case a little bit
more. RISC-V KVM relies on SBI PMU[1] (equivalent to a hypercall on x86 or
HVC on ARM). As the RISC-V ISA doesn't define any event encodings, the SBI
PMU defines a standard set of encodings corresponding to the perf legacy
events. When a guest tries to allocate a counter for an event, it makes an
SBI call (CFG_MATCH) with either an SBI event encoding (matching a perf
legacy event) or a raw event encoding. The host kernel allocates a virtual
counter, programs the corresponding event CSRs, and enables the counter.
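
For reference, the guest-side request in that flow is roughly the sketch
below. The constant names are the ones the Linux riscv_pmu_sbi driver uses
from asm/sbi.h; the helper itself is made up and the flags/raw-event
handling are omitted, so treat it as an illustration of the call, not as
the real driver code.

  #include <asm/sbi.h>

  /* Hypothetical helper: ask the host (via SBI) for a counter that can
   * count cycles, using the SBI "standard" hardware event id. For a raw
   * event the driver would instead pass a RAW-type event_idx and the
   * vendor encoding in event_data. */
  static int guest_cfg_match_cycles(unsigned long cbase, unsigned long cmask)
  {
          struct sbiret ret;

          ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_CFG_MATCH,
                          cbase, cmask, 0 /* flags */,
                          SBI_PMU_HW_CPU_CYCLES, 0 /* event_data */, 0);
          if (ret.error)
                  return sbi_err_map_linux_errno(ret.error);

          return ret.value; /* counter index chosen by the host */
  }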

There are two possible approaches to support it.

1. The guest OS has the correct version of the perf tool, with a json
file that provides the encoding of the events supported by the running
host. The guest OS passes the exact encoding of the event during the
CFG_MATCH SBI call as a raw event, and the host programs the event CSR.
It is a much simpler scheme with less management on the host side. But
the perf tool on guests has to pass any perf legacy events to the driver
as raw events instead of PERF_TYPE_HARDWARE/HW_CACHE, or indicate in
some other way that the event encoding comes from json (roughly as in
the sketch below the list).

The other issue is that the VMM cannot modify the vendorid/implid/archid
shown to the guest (the defaults are the same as the host's). Migration
across CPU generations or vendors won't be possible if perf is in use.
This may not be an issue, as VM migration across CPU generations is not
a common thing.

2. The guest OS driver always relies on the SBI PMU event encoding
(i.e. the perf legacy event), which the host can translate to the event
encoding the hardware supports if that translation is baked into the
driver. The obvious downside is the vendor-specific encodings in the
driver, which we are trying to avoid.

[1] https://github.com/riscv-non-isa/riscv-sbi-doc/blob/master/src/ext-pmu.adoc
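
To make option 1 concrete, from the guest's point of view it would look
roughly like the sketch below (the config value is made up; it stands in
for whatever the guest's json file says the host supports). The guest
driver would then forward that number through CFG_MATCH as a raw event
instead of translating a legacy id:

  #include <linux/perf_event.h>

  /* What the guest perf tool (or any profiler) would hand to
   * perf_event_open() under option 1: a raw, json-derived encoding
   * rather than PERF_TYPE_HARDWARE/PERF_TYPE_HW_CACHE. */
  struct perf_event_attr attr = {
          .size   = sizeof(struct perf_event_attr),
          .type   = PERF_TYPE_RAW,
          .config = 0x1234, /* made-up encoding from the guest's json */
  };

  /* perf_event_open(&attr, ...) -> guest riscv_pmu_sbi driver ->
   * CFG_MATCH(event_idx type = RAW, event_data = 0x1234) */
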
> > > And wouldn't porting existing software to RISC-V be an issue if it
> > > doesn't behave in a similar way to what's there already?
> > >
> > > > James, could you show the behavior on the Neoverse with the CMN PMU
> > > > for perf record of "cycles:pp", given the sensitivities there?
> > > >
> > >
> > > Yep I can check this on Monday.
> > >
> > > > Thanks,
> > > > Ian
> > > >
> > >
> > >
> > > >
> > > >
> > > >
> > > >> Ian Rogers (4):
> > > >>    perf evsel: Add pmu_name helper
> > > >>    perf stat: Fix find_stat for mixed legacy/non-legacy events
> > > >>    perf record: Skip don't fail for events that don't open
> > > >>    perf parse-events: Reapply "Prefer sysfs/JSON hardware events over
> > > >>      legacy"
> > > >>
> > > >>   tools/perf/builtin-record.c    | 22 +++++++---
> > > >>   tools/perf/util/evsel.c        | 10 +++++
> > > >>   tools/perf/util/evsel.h        |  1 +
> > > >>   tools/perf/util/parse-events.c | 26 +++++++++---
> > > >>   tools/perf/util/parse-events.l | 76 +++++++++++++++++-----------------
> > > >>   tools/perf/util/parse-events.y | 60 ++++++++++++++++++---------
> > > >>   tools/perf/util/pmus.c         | 20 +++++++--
> > > >>   tools/perf/util/stat-shadow.c  |  3 +-
> > > >>   8 files changed, 145 insertions(+), 73 deletions(-)
> > > >>
> > > >> --
> > > >> 2.47.0.163.g1226f6d8fa-goog
> > > >>


