[PATCH v3 02/14] KVM: arm64: PMU: Align chained counter implementation with architecture pseudocode
Reiji Watanabe
reijiw at google.com
Sat Nov 12 09:11:42 PST 2022
Hi Marc,
> > > +#define PERF_ATTR_CFG1_COUNTER_64BIT BIT(0)
> >
> > Although this isn't the new code (but just a name change),
> > wouldn't it be nicer to have armv8pmu_event_is_64bit()
> > (in arch/arm64/kernel/perf_event.c) use the macro as well ?
>
> We tried that in the past, and the amount of churn wasn't really worth
> it. I'm happy to revisit this in the future, but probably as a
> separate patch.
I see... Thank you for the clarification. Since this isn't new code,
I agree with not addressing it in this series.
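For reference, a minimal self-contained sketch of what reusing the shared
macro in armv8pmu_event_is_64bit() could look like (the struct definitions
below are simplified stand-ins for the kernel's, for illustration only):

```c
#include <stdint.h>
#include <stdbool.h>

/* Simplified stand-ins for the kernel definitions (illustrative only) */
#define BIT(n)                        (1ULL << (n))
#define PERF_ATTR_CFG1_COUNTER_64BIT  BIT(0)

struct perf_event_attr {
	uint64_t config1;
};

struct perf_event {
	struct perf_event_attr attr;
};

/*
 * Sketch: test the 64-bit-counter request via the shared macro
 * rather than an open-coded bit test against config1.
 */
static bool armv8pmu_event_is_64bit(struct perf_event *event)
{
	return !!(event->attr.config1 & PERF_ATTR_CFG1_COUNTER_64BIT);
}
```

This keeps the config1 bit definition in one place, so the perf driver and
the KVM PMU emulation cannot drift apart on its meaning.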
> > > @@ -340,11 +245,8 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
> > >
> > > pmc = &pmu->pmc[i];
> > >
> > > - /* A change in the enable state may affect the chain state */
> > > - kvm_pmu_update_pmc_chained(vcpu, i);
> > > kvm_pmu_create_perf_event(vcpu, i);
> > >
> > > - /* At this point, pmc must be the canonical */
> > > if (pmc->perf_event) {
> > > perf_event_enable(pmc->perf_event);
> > > if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > > @@ -375,11 +277,8 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
> > >
> > > pmc = &pmu->pmc[i];
> > >
> > > - /* A change in the enable state may affect the chain state */
> > > - kvm_pmu_update_pmc_chained(vcpu, i);
> > > kvm_pmu_create_perf_event(vcpu, i);
> >
> > Do we still need to call kvm_pmu_update_pmc_chained() here even
> > with this patch ? (I would think the reason why the function was
> > called here was because the chain state change could affect the
> > backed perf event attribute before).
> > I have the same comment for kvm_pmu_enable_counter_mask().
>
> Do you mean kvm_pmu_create_perf_event() instead? I think we can drop
> the one on disable. But the one on enable is required, as we need to
> be able to start counting an event even if the guest hasn't programmed
> the event number (unlikely, but allowed by the architecture). It can
> be made conditional though.
I'm sorry for the confusion...
Yes, kvm_pmu_create_perf_event() is what I meant.
Thank you for the explanation for the enable case.
Now I understand.
>
> I have the following fix queued:
Looks good!
Thank you,
Reiji
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 26293f842b0f..b7a5f75d008d 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -277,9 +277,9 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>
> pmc = &pmu->pmc[i];
>
> - kvm_pmu_create_perf_event(vcpu, i);
> -
> - if (pmc->perf_event) {
> + if (!pmc->perf_event) {
> + kvm_pmu_create_perf_event(vcpu, i);
> + } else {
> perf_event_enable(pmc->perf_event);
> if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> kvm_debug("fail to enable perf event\n");
> @@ -309,8 +309,6 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>
> pmc = &pmu->pmc[i];
>
> - kvm_pmu_create_perf_event(vcpu, i);
> -
> if (pmc->perf_event)
> perf_event_disable(pmc->perf_event);
> }