[RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

Anup Patel anup at brainfault.org
Tue Jan 13 20:28:32 PST 2015


On Mon, Jan 12, 2015 at 12:41 AM, Christoffer Dall
<christoffer.dall at linaro.org> wrote:
> On Tue, Dec 30, 2014 at 11:19:13AM +0530, Anup Patel wrote:
>> (dropping previous conversation for easy reading)
>>
>> Hi Marc/Christoffer,
>>
>> I tried implementing PMU context-switch via C code
>> in EL1 mode and in atomic context with irqs disabled.
>> The context switch itself works perfectly fine, but
>> irq forwarding is not clean for the PMU irq.
>>
>> I found another issue: the GIC only samples irq
>> lines if they are enabled. This means that for using
>> irq forwarding we will need to ensure that the host PMU
>> irq is enabled.  The arch_timer code does this by
>> doing request_irq() for the host virtual timer interrupt.
>> For the PMU, we can either enable/disable the host PMU
>> irq in the context switch, or we need to have a shared
>> irq handler between the KVM PMU and the host kernel PMU.
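To illustrate the shared handler option, the sketch below is roughly what I
have in mind (kvm_pmu_vcpu_running(), kvm_pmu_forward_irq() and
host_pmu_handler() are made-up names for this example, not existing APIs):

#include <linux/interrupt.h>

/*
 * Wrapper around the host PMU interrupt that decides whether the
 * overflow belongs to the guest VCPU that was just running or to the
 * host perf code.
 */
static irqreturn_t kvm_host_pmu_handler(int irq, void *dev_id)
{
	/* Did the overflow fire while a guest VCPU owned the counters? */
	if (kvm_pmu_vcpu_running(dev_id)) {
		kvm_pmu_forward_irq(dev_id);	/* latch it for virtual injection */
		return IRQ_HANDLED;
	}

	/* Otherwise let the normal host PMU driver handle it. */
	return host_pmu_handler(irq, dev_id);
}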
>
> Could we simply require the host PMU driver to request the IRQ and have
> the driver inject the corresponding IRQ into the VM via a mechanism
> similar to what VFIO does, using an eventfd and irqfds etc.?
>
> (I haven't quite thought through if there's a way for the host PMU
> driver to distinguish between an IRQ for itself and one for the guest,
> though).
>
> It does feel like we will need some sort of communication/coordination
> between the host PMU driver and KVM...
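If I understand the eventfd/irqfd suggestion correctly, the host-side hook
would be something like the sketch below (kvm_pmu_get_guest_eventfd() is a
hypothetical hook; eventfd_signal() is the existing in-kernel API), with the
open question you mention being how the host PMU driver decides that an
overflow belongs to the guest:

#include <linux/eventfd.h>

/*
 * Called by the host PMU driver when it decides an overflow belongs to
 * a guest: signal the eventfd that userspace has bound (via KVM_IRQFD)
 * to the guest's PMU interrupt, and KVM's irqfd code injects it.
 */
static void pmu_notify_guest_overflow(void *vcpu_cookie)
{
	struct eventfd_ctx *ctx = kvm_pmu_get_guest_eventfd(vcpu_cookie);

	if (ctx)
		eventfd_signal(ctx, 1);
}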
>
>>
>> I have rethought our discussion so far. I
>> understand that we need KVM PMU virtualization
>> to meet the following criteria:
>> 1. No modification to the host PMU driver
>
> Is this really a strict requirement?  One of the advantages of KVM
> should be that the rest of the kernel is supportive of KVM.
>
>> 2. No modification to the guest PMU driver
>> 3. No mask/unmask dance for sharing the host PMU irq
>> 4. A clean way to avoid infinite VM exits due to the
>> PMU interrupt
>>
>> I have come up with a new approach, which is as follows:
>> 1. Context-switch the PMU in atomic context (i.e. local_irq_disable())
>> 2. Ensure that the host PMU irq is disabled when entering guest
>> mode and re-enable the host PMU irq when exiting guest mode if
>> it was enabled previously.
>
> How does this look, software-engineering wise?  Would you be looking
> up the IRQ number from the DT in the KVM code again?  How does KVM then
> synchronize with the host PMU driver so they're not both requesting the
> same IRQ at the same time?
>
>> This is to avoid infinite VM exits
>> due to the PMU interrupt because, as per the new approach, we
>> don't mask the PMU irq via the PMINTENSET_EL1 register.
>> 3. Inject the virtual PMU irq at the time of entering guest mode
>> if the PMU overflow register (i.e. PMOVSSET_EL0) is non-zero, in
>> atomic context (i.e. local_irq_disable()).
>>
>> The only limitation of this new approach is that the virtual PMU irq
>> is injected at the time of entering guest mode. This means the guest
>> will receive the virtual PMU interrupt with a small delay after the
>> actual interrupt occurred.
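To make steps 1-3 above concrete, the guest-entry half would look roughly
like the sketch below. The kvm_pmu_*_context() helpers, host_pmu_irq and
vcpu->arch.pmu_irq are placeholders, not existing KVM code;
disable_percpu_irq() and kvm_vgic_inject_irq() are the existing APIs I would
expect to use, assuming the PMU irq is a PPI:

/* Runs in the atomic world-switch path, i.e. with local irqs disabled. */
static void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
{
	u64 pmovsset;

	/* Step 2: keep the host PMU irq quiet while the guest owns the PMU. */
	disable_percpu_irq(host_pmu_irq);

	/* Step 1: switch the PMU register state over to the guest's copy. */
	kvm_pmu_save_host_context(vcpu);
	kvm_pmu_restore_guest_context(vcpu);

	/* Step 3: if an overflow is already pending for the guest, inject it. */
	asm volatile("mrs %0, pmovsset_el0" : "=r" (pmovsset));
	if (pmovsset)
		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
				    vcpu->arch.pmu_irq, true);
}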
>
> it may never receive it in the case of a tickless configuration AFAICT,
> so this doesn't sound like the right approach.

The PMU interrupts are not similar to arch_timer interrupts. In fact,
they are overflow interrupts on event counters. The PMU events
of a Guest VCPU are only counted while that Guest VCPU is running.
If the Guest VCPU is scheduled out, or we are in Host mode, then
the PMU events are counted for the Host or for whichever other
Guest is currently running.

In my view, this does not break a tickless guest.

Also, the above fact applies irrespective of the approach we take
for PMU virtualization.
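
To illustrate the point about counting: the save half of the PMU context
switch stops the counters, so nothing is counted on the guest's behalf once
it is scheduled out. A minimal sketch (struct kvm_pmu_state and the helper
name are made up for this example):

#include <linux/types.h>

struct kvm_pmu_state {		/* made-up container for this example */
	u64 pmcr_el0;
	/* ... other PMU registers ... */
};

static void kvm_pmu_save_guest_context(struct kvm_pmu_state *pmu)
{
	u64 pmcr;

	asm volatile("mrs %0, pmcr_el0" : "=r" (pmcr));
	pmu->pmcr_el0 = pmcr;

	/* Clear PMCR_EL0.E (bit 0) so the counters stop counting for the guest. */
	asm volatile("msr pmcr_el0, %0" : : "r" (pmcr & ~1UL));
	asm volatile("isb" : : : "memory");
}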

Regards,
Anup

>
>> The PMU interrupts are only overflow events
>> and are generally not used in any timing-critical applications. If we
>> can live with this limitation, then this can be a good approach
>> for KVM PMU virtualization.
>>
> Thanks,
> -Christoffer


