[PATCH 1/1] KVM: arm64: PMU: Avoid inappropriate use of host's PMUVer

Reiji Watanabe reijiw at google.com
Sat Jun 10 21:54:30 PDT 2023


Hi Oliver,

On Sat, Jun 10, 2023 at 05:57:34PM -0700, Oliver Upton wrote:
> Hi Reiji,
> 
> On Sat, Jun 10, 2023 at 12:45:10PM -0700, Reiji Watanabe wrote:
> > @@ -735,7 +736,7 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
> >  		 * Don't advertise STALL_SLOT, as PMMIR_EL0 is handled
> >  		 * as RAZ
> >  		 */
> > -		if (vcpu->kvm->arch.arm_pmu->pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4)
> > +		if (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P4)
> >  			val &= ~BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32);
> 
> I don't think this conditional masking is correct in the first place,

I'm not sure why you think this conditional masking is incorrect.
Could you please elaborate?


> and this change would only make it worse.
> 
> We emulate reads of PMCEID1_EL0 using the literal value of the CPU. The
> _advertised_ PMU version has no bearing on the core PMU version. So,
> assuming we hit this on a v3p5+ part with userspace (stupidly)
> advertising an older implementation level, we never clear the bit for
> STALL_SLOT.

I'm not sure I understand this comment correctly.
When the guest's PMUVer is older than v3p4, I don't think we need
to clear the bit for STALL_SLOT, since PMMIR_EL1 is not implemented
for the guest (PMMIR_EL1 is implemented only on v3p4 or newer).
Or am I missing something?

BTW, as KVM doesn't expose a vPMU to the guest on systems with a
non-uniform PMUVer (the sanitized value of ID_AA64DFR0_EL1.PMUVer
on such systems is zero), it is unlikely that a guest on such systems
will read this register (KVM should inject UNDEFINED in this case,
although it currently doesn't).


> 
> So let's just fix the issue by unconditionally masking the bit.
> 
> >  		base = 32;
> >  	}
> > @@ -932,11 +933,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
> >  		return 0;
> >  	}
> >  	case KVM_ARM_VCPU_PMU_V3_FILTER: {
> > +		u8 pmuver = kvm_arm_pmu_get_pmuver_limit();
> >  		struct kvm_pmu_event_filter __user *uaddr;
> >  		struct kvm_pmu_event_filter filter;
> >  		int nr_events;
> >  
> > -		nr_events = kvm_pmu_event_mask(kvm) + 1;
> > +		/*
> > +		 * Allow userspace to specify an event filter for the entire
> > +		 * event range supported by PMUVer of the hardware, rather
> > +		 * than the guest's PMUVer for KVM backward compatibility.
> > +		 */
> > +		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
> 
> This is a rather signifcant change from the existing behavior though,
> no?
> 
> The 'raw' PMU version of the selected instance has been used as the
> basis of the maximum event list, but this uses the sanitised value. I'd
> rather we consistently use the selected PMU instance as the basis for
> all guest-facing PMU emulation.
> 
> I get that asymmetry in this deparment is exceedingly rare in the wild,
> but I'd rather keep a consistent model in the PMU emulation code where
> all our logic is based on the selected PMU instance.

Oh, sorry, I forgot to update this from the previous (slightly different)
series [1], where kvm_arm_pmu_get_pmuver_limit() returned
kvm->arch.arm_pmu->pmuver.  Although the sanitized value will always be
the same as kvm->arch.arm_pmu->pmuver with the series [2], I didn't mean
to change this in this patch.

[1] https://lore.kernel.org/all/20230527040236.1875860-1-reijiw@google.com/
[2] https://lore.kernel.org/all/20230610061520.3026530-1-reijiw@google.com/

Thank you,
Reiji



More information about the linux-arm-kernel mailing list