[PATCH v1 1/2] KVM: arm64: PMU: Restore the host's PMUSERENR_EL0

Reiji Watanabe reijiw at google.com
Wed Mar 29 09:28:56 PDT 2023


Hi Marc,

On Wed, Mar 29, 2023 at 08:31:24AM +0100, Marc Zyngier wrote:
> On Wed, 29 Mar 2023 01:21:35 +0100,
> Reiji Watanabe <reijiw at google.com> wrote:
> >
> > Restore the host's PMUSERENR_EL0 value instead of clearing it
> > before returning to userspace, as the host's EL0 might have
> > direct access to PMU registers (some bits of PMUSERENR_EL0
> > might not be zero).
> >
> > Fixes: 83a7a4d643d3 ("arm64: perf: Enable PMU counter userspace access for perf event")
> > Signed-off-by: Reiji Watanabe <reijiw at google.com>
> > ---
> >  arch/arm64/include/asm/kvm_host.h       | 3 +++
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 3 ++-
> >  2 files changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index bcd774d74f34..82220ecec10e 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -544,6 +544,9 @@ struct kvm_vcpu_arch {
> >
> >     /* Per-vcpu CCSIDR override or NULL */
> >     u32 *ccsidr;
> > +
> > +   /* The host's PMUSERENR_EL0 value, saved before guest entry */
> > +   u64     host_pmuserenr_el0;
>
> I don't think we need this in each and every vcpu. Why can't this be
> placed in struct kvm_host_data and accessed via the per-cpu pointer?
> Maybe even use the PMUSERENR_EL0 field in the sysreg array?

Thank you for the nice suggestion.
Indeed, that would be better.  I will fix it in v2.
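
For reference, a rough (untested) sketch of what I have in mind for v2,
stashing the host's value in the per-CPU kvm_host_data's host context
rather than in the vcpu. I'm assuming the existing kvm_host_data per-CPU
variable and the ctxt_sys_reg() accessor can be reused here as-is:

```
static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
{
	/* ... */
	if (kvm_arm_support_pmu_v3()) {
		struct kvm_cpu_context *hctxt;

		write_sysreg(0, pmselr_el0);

		/* Save the host's PMUSERENR_EL0 in the per-CPU host context */
		hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
		ctxt_sys_reg(hctxt, PMUSERENR_EL0) = read_sysreg(pmuserenr_el0);
		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
	}
	/* ... */
}

static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
{
	/* ... */
	if (kvm_arm_support_pmu_v3()) {
		struct kvm_cpu_context *hctxt;

		/* Restore the host's PMUSERENR_EL0 from the per-CPU host context */
		hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
		write_sysreg(ctxt_sys_reg(hctxt, PMUSERENR_EL0), pmuserenr_el0);
	}
	/* ... */
}
```

This keeps the saved value per physical CPU, which matches the lifetime
of the register (it is context-switched by the host, not per-vcpu state).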

>
> There is probably a number of things that we could move there, but
> let's start by not adding more unnecessary stuff to the vcpu
> structure.

Yeah, I agree.

Thank you,
Reiji



>
> >  };
> >
> >  /*
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 07d37ff88a3f..44b84fbdde0d 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -82,6 +82,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
> >      */
> >     if (kvm_arm_support_pmu_v3()) {
> >             write_sysreg(0, pmselr_el0);
> > +           vcpu->arch.host_pmuserenr_el0 = read_sysreg(pmuserenr_el0);
> >             write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
> >     }
> >
> > @@ -106,7 +107,7 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
> >
> >     write_sysreg(0, hstr_el2);
> >     if (kvm_arm_support_pmu_v3())
> > -           write_sysreg(0, pmuserenr_el0);
> > +           write_sysreg(vcpu->arch.host_pmuserenr_el0, pmuserenr_el0);
> >
> >     if (cpus_have_final_cap(ARM64_SME)) {
> >             sysreg_clear_set_s(SYS_HFGRTR_EL2, 0,
>
> Thanks,
>
>       M.
>
> --
> Without deviation from the norm, progress is not possible.



More information about the linux-arm-kernel mailing list