[PATCH] KVM: arm64: Fix smp_processor_id() call in preemptible context
Oliver Upton
oliver.upton at linux.dev
Tue Jun 6 09:48:14 PDT 2023
On Tue, Jun 06, 2023 at 05:17:34PM +0100, Marc Zyngier wrote:
> On Tue, 06 Jun 2023 15:10:44 +0100, Oliver Upton <oliver.upton at linux.dev> wrote:
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index 491ca7eb2a4c..933a6331168b 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -700,7 +700,7 @@ static struct arm_pmu *kvm_pmu_probe_armpmu(void)
> >
> > mutex_lock(&arm_pmus_lock);
> >
> > - cpu = smp_processor_id();
> > + cpu = raw_smp_processor_id();
> > list_for_each_entry(entry, &arm_pmus, entry) {
> > tmp = entry->arm_pmu;
> >
> >
>
> If preemption doesn't matter (and I really don't think it does), why
> are we looking for the current CPU? I'd rather we pick the PMU that
> is associated with CPU0 (we're pretty sure it exists), and be done
> with it.
Getting the current CPU is still useful; we just don't care about that
CPU number being stale by the time we use it. Unconditionally using CPU0,
on the other hand, could break existing usage patterns.
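
For context, here is roughly what the probe path looks like with that
change applied. The list walk comes from the hunk quoted above; the
supported_cpus check and the tail of the function are my reconstruction,
so read it as a sketch rather than a verbatim copy of the tree:

static struct arm_pmu *kvm_pmu_probe_armpmu(void)
{
        struct arm_pmu *tmp, *pmu = NULL;
        struct arm_pmu_entry *entry;
        int cpu;

        mutex_lock(&arm_pmus_lock);

        /*
         * Preemption is enabled here (we're holding a mutex), so the CPU
         * number may be stale by the time it is used. That's fine: it is
         * only a hint for picking a PMU that covers the cluster we were
         * running on, hence raw_smp_processor_id() over smp_processor_id().
         */
        cpu = raw_smp_processor_id();
        list_for_each_entry(entry, &arm_pmus, entry) {
                tmp = entry->arm_pmu;

                /* Sketch: match the PMU whose supported_cpus covers 'cpu' */
                if (cpumask_test_cpu(cpu, &tmp->supported_cpus)) {
                        pmu = tmp;
                        break;
                }
        }

        mutex_unlock(&arm_pmus_lock);

        return pmu;
}

If we migrate between reading the CPU number and walking the list, we may
end up matching the PMU of the core we just left; that is exactly the
staleness we are saying we don't care about.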
A not-too-contrived example would be to taskset QEMU onto a cluster of
cores in a big.LITTLE system (I do this). The current behavior would
assign the right PMU to the guest. I've made my opinions about the 'old'
ABI quite clear, but I don't have much of an appetite for breaking it,
however fragile it may be.
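
To make that usage pattern concrete, here is a minimal userspace sketch
of the pinning taskset does, written against sched_setaffinity(); the CPU
numbers for the 'big' cluster are made up for illustration:

/*
 * Illustration only: pin the calling process (e.g. the VMM) to an
 * assumed "big" cluster on CPUs 4-7, the moral equivalent of
 * `taskset -c 4-7 qemu-system-aarch64 ...`. With the process confined
 * to those cores, kvm_pmu_probe_armpmu() runs on one of them and picks
 * the PMU backing that cluster.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        cpu_set_t set;
        int cpu;

        CPU_ZERO(&set);
        for (cpu = 4; cpu <= 7; cpu++)          /* assumed big cores */
                CPU_SET(cpu, &set);

        if (sched_setaffinity(0, sizeof(set), &set)) {
                perror("sched_setaffinity");
                return EXIT_FAILURE;
        }

        /* ... exec or carry on as the VMM from here ... */
        return 0;
}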
Can we proceed with the fix I suggested, along with a more complete
description of the baggage we're carrying?
--
Thanks,
Oliver