[PATCH] KVM: arm64: Disabling disabled PMU counters wastes a lot of time

Marc Zyngier maz at kernel.org
Tue Jun 29 07:25:13 PDT 2021


On 2021-06-29 15:17, Alexandre Chartre wrote:
> On 6/29/21 3:47 PM, Marc Zyngier wrote:
>> On Tue, 29 Jun 2021 14:16:55 +0100,
>> Alexandre Chartre <alexandre.chartre at oracle.com> wrote:
>>> 
>>> 
>>> Hi Marc,
>>> 
>>> On 6/29/21 11:06 AM, Marc Zyngier wrote:
>>>> Hi Alexandre,
>> 
>> [...]
>> 
>>>> So the sysreg is the only thing we should consider, and I think we
>>>> should drop the useless masking. There is at least another instance
>>>> of this in the PMU code (kvm_pmu_overflow_status()), and apart from
>>>> kvm_pmu_vcpu_reset(), only the sysreg accessors should care about
>>>> the masking to sanitise accesses.
>>>> 
>>>> What do you think?
>>>> 
>>> 
>>> I think you are right. PMCNTENSET_EL0 is already masked with
>>> kvm_pmu_valid_counter_mask(), so there's effectively no need to mask
>>> it again when we use it. I will send an additional patch (on top of
>>> this one) to remove the useless masking. Basically, the changes would be:
>>> 
>>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
>>> index bab4b735a0cf..e0dfd7ce4ba0 100644
>>> --- a/arch/arm64/kvm/pmu-emul.c
>>> +++ b/arch/arm64/kvm/pmu-emul.c
>>> @@ -373,7 +373,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
>>>                  reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>>>                  reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>>>                  reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
>>> -               reg &= kvm_pmu_valid_counter_mask(vcpu);
>>>          }
>>>
>>>          return reg;
>>> @@ -564,21 +563,22 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>>>    */
>>>   void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>>>   {
>>> -       unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
>>> +       unsigned long mask;
>>>          int i;
>>>
>>>          if (val & ARMV8_PMU_PMCR_E) {
>>>                  kvm_pmu_enable_counter_mask(vcpu,
>>> -                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
>>> +                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
>>>          } else {
>>>                  kvm_pmu_disable_counter_mask(vcpu,
>>> -                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
>>> +                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
>>>          }
>>>
>>>          if (val & ARMV8_PMU_PMCR_C)
>>>                  kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
>>>
>>>          if (val & ARMV8_PMU_PMCR_P) {
>>> +               mask = kvm_pmu_valid_counter_mask(vcpu);
>> 
>> Careful here: this clashes with a fix from Alexandru that is currently
>> in -next (PMCR_EL0.P shouldn't reset the cycle counter) and is aimed at
>> 5.14. And whilst you're at it, consider moving the 'mask' declaration
>> here too.
>> 
>>>                  for_each_set_bit(i, &mask, 32)
>>>                          kvm_pmu_set_counter_value(vcpu, i, 0);
>>>          }
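
(For context: the mask being dropped above comes from kvm_pmu_valid_counter_mask(),
which just builds the bitmap of implemented counters from PMCR_EL0.N plus the
cycle counter. A rough sketch, paraphrased from the 5.13-era sources, so treat
the exact details as approximate:

        u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
        {
                /* Number of implemented event counters, taken from PMCR_EL0.N */
                u64 n = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;

                n &= ARMV8_PMU_PMCR_N_MASK;

                /* Event counters 0..N-1, plus the fixed cycle counter */
                if (n)
                        return GENMASK(n - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX);
                return BIT(ARMV8_PMU_CYCLE_IDX);
        }

Because the sysreg write accessors AND guest writes with this mask before storing
them, PMCNTENSET_EL0 can only ever contain bits that are already inside it, which
is why the extra masking is redundant.)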
>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>> index 1a7968ad078c..2e406905760e 100644
>>> --- a/arch/arm64/kvm/sys_regs.c
>>> +++ b/arch/arm64/kvm/sys_regs.c
>>> @@ -845,7 +845,7 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>>                          kvm_pmu_disable_counter_mask(vcpu, val);
>>>                  }
>>>          } else {
>>> -               p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
>>> +               p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>>>          }
>>>
>>>          return true;
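
(The reason the read side can drop the '& mask' is that the write side of
access_pmcnten() already sanitises the value with the same mask before it is
stored into PMCNTENSET_EL0. Roughly, again paraphrased from the 5.13-era code
rather than quoted verbatim:

        if (p->is_write) {
                /* Only bits for implemented counters are ever stored */
                val = p->regval & kvm_pmu_valid_counter_mask(vcpu);
                if (r->Op2 & 0x1) {
                        /* accessing PMCNTENSET_EL0 */
                        __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
                        kvm_pmu_enable_counter_mask(vcpu, val);
                } else {
                        /* accessing PMCNTENCLR_EL0 */
                        __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
                        kvm_pmu_disable_counter_mask(vcpu, val);
                }
        }

So after the change, the read branch simply returns the already-sanitised
shadow register.)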
>> 
>> If you are cleaning up the read side of the sysregs, access_pminten() and
>> access_pmovs() could use some of your attention too.
>> 
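
(The read paths of access_pminten() and access_pmovs() follow the same pattern,
so the equivalent cleanup there would presumably look something like the
following; this is illustrative only, and the exact context lines may differ:

         } else {
        -               p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
        +               p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
         }

with the analogous change for PMINTENSET_EL1 in access_pminten().)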
> 
> Ok, so for now, I will just resubmit the initial patch with the commit
> message fixes. Then I'll look at all the mask cleanup on top of
> Alexandru's changes and prepare another patch.

Please send this as a series rather than individual patches. I'm only
queuing critical fixes at the moment (this is the merge window).
If you post the series after -rc1, I'll queue it and let it simmer
in -next.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


