[PATCH v3 3/4] KVM: arm64: Add KVM_ARM_VCPU_PMU_V3_SET_PMU attribute

Marc Zyngier maz at kernel.org
Fri Jan 7 06:35:20 PST 2022


On Fri, 07 Jan 2022 11:08:05 +0000,
Alexandru Elisei <alexandru.elisei at arm.com> wrote:
> 
> Hi Marc,
> 
> On Thu, Jan 06, 2022 at 06:16:04PM +0000, Marc Zyngier wrote:
> > On Thu, 06 Jan 2022 11:54:11 +0000,
> > Alexandru Elisei <alexandru.elisei at arm.com> wrote:
> > > 
> > > 2. What's to stop userspace from changing the PMU after at least one VCPU
> > > has run? That can be easily observed by the guest when reading PMCEIDx_EL0.
> > 
> > That's a good point. We need something here. It is a bit odd: to do
> > that, you need to fully enable a PMU on one vcpu but not on the other,
> > then run the first while changing things on the other. Something along
> > these lines (untested):
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 4bf28905d438..4f53520e84fd 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -139,6 +139,7 @@ struct kvm_arch {
> >  
> >  	/* Memory Tagging Extension enabled for the guest */
> >  	bool mte_enabled;
> > +	bool ran_once;
> >  };
> >  
> >  struct kvm_vcpu_fault_info {
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 83297fa97243..3045d7f609df 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -606,6 +606,10 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> >  
> >  	vcpu->arch.has_run_once = true;
> >  
> > +	mutex_lock(&kvm->lock);
> > +	kvm->arch.ran_once = true;
> > +	mutex_unlock(&kvm->lock);
> > +
> >  	kvm_arm_vcpu_init_debug(vcpu);
> >  
> >  	if (likely(irqchip_in_kernel(kvm))) {
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index dfc0430d6418..95100c541244 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -959,8 +959,9 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
> >  		arm_pmu = entry->arm_pmu;
> >  		if (arm_pmu->pmu.type == pmu_id) {
> >  			/* Can't change PMU if filters are already in place */
> > -			if (kvm->arch.arm_pmu != arm_pmu &&
> > -			    kvm->arch.pmu_filter) {
> > +			if ((kvm->arch.arm_pmu != arm_pmu &&
> > +			     kvm->arch.pmu_filter) ||
> > +			    kvm->arch.ran_once) {
> >  				ret = -EBUSY;
> >  				break;
> >  			}
> > @@ -1040,6 +1041,11 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
> >  
> >  		mutex_lock(&vcpu->kvm->lock);
> >  
> > +		if (vcpu->kvm->arch.ran_once) {
> > +			mutex_unlock(&vcpu->kvm->lock);
> > +			return -EBUSY;
> > +		}
> > +
> >  		if (!vcpu->kvm->arch.pmu_filter) {
> >  			vcpu->kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
> >  			if (!vcpu->kvm->arch.pmu_filter) {
> > 
> > which should prevent both the PMU and the filters from being changed
> > once a single vcpu has run.
> > 
> > Thoughts?
> 
> Haven't tested it either, but it looks good to me. If you agree, I can pick
> the diff, turn it into a patch and send it for the next iteration of this
> series as a fix for the PMU events filter, while keeping your authorship.

Of course, please help yourself! :-)

	M.

-- 
Without deviation from the norm, progress is not possible.
