[PATCH 1/9] KVM: arm64: PMU: Align chained counter implementation with architecture pseudocode
Reiji Watanabe
reijiw at google.com
Thu Oct 27 07:33:34 PDT 2022
Hi Marc,
> > > +static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
> > > + unsigned long mask, u32 event)
> > > +{
> > > + int i;
> > > +
> > > + if (!kvm_vcpu_has_pmu(vcpu))
> > > + return;
> > > +
> > > + if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> > > + return;
> > > +
> > > + /* Weed out disabled counters */
> > > + mask &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
> > > +
> > > + for_each_set_bit(i, &mask, ARMV8_PMU_CYCLE_IDX) {
> > > + u64 type, reg;
> > > +
> > > + /* Filter on event type */
> > > + type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
> > > + type &= kvm_pmu_event_mask(vcpu->kvm);
> > > + if (type != event)
> > > + continue;
> > > +
> > > + /* Increment this counter */
> > > + reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
> > > + reg = lower_32_bits(reg);
> > > + __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
> > > +
> > > + if (reg) /* No overflow? move on */
> > > + continue;
> > > +
> > > + /* Mark overflow */
> > > + __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
> >
> > Perhaps it might be useful to create another helper that takes
> > care of just one counter (it would essentially do the code above
> > in the loop). The helper could be used (in addition to the above
> > loop) from the code below for the CHAIN event case and from
> > kvm_pmu_perf_overflow(). Then unnecessary execution of
> > for_each_set_bit() could be avoided for these two cases.
>
> I'm not sure it really helps. We would still need to check whether the
> counter is enabled, and we'd need to bring that into the helper
> instead of keeping it outside of the loop.
That's true; I overlooked that.
Although the check with kvm_vcpu_has_pmu() would be unnecessary
(redundant) in such a helper, the check with PMCR_EL0.E is still
necessary.
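
Just for reference, the single-counter helper I had in mind would
look roughly like the sketch below (kvm_pmu_counter_increment_one is
a hypothetical name, not part of the posted patch; as you point out,
both enable checks end up inside the helper):

  /* Hypothetical sketch only -- not part of the posted patch. */
  static void kvm_pmu_counter_increment_one(struct kvm_vcpu *vcpu, int i)
  {
          u64 reg;

          /* PMCR_EL0.E globally gates event counting */
          if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
                  return;

          /* The counter must also be enabled in PMCNTENSET_EL0 */
          if (!(__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(i)))
                  return;

          /* Increment and wrap at 32 bits */
          reg = lower_32_bits(__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1);
          __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;

          /* A wrap to zero means the counter overflowed */
          if (!reg)
                  __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
  }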
Thank you,
Reiji