[PATCH v3 3/4] KVM: arm64: GICv4.1: Restore VLPI's pending state to physical side
Marc Zyngier
maz at kernel.org
Fri Mar 12 12:02:24 GMT 2021
On Fri, 12 Mar 2021 11:34:07 +0000,
Shenming Lu <lushenming at huawei.com> wrote:
>
> On 2021/3/12 19:10, Marc Zyngier wrote:
> > On Fri, 12 Mar 2021 10:48:29 +0000,
> > Shenming Lu <lushenming at huawei.com> wrote:
> >>
> >> On 2021/3/12 17:05, Marc Zyngier wrote:
> >>> On Thu, 11 Mar 2021 12:32:07 +0000,
> >>> Shenming Lu <lushenming at huawei.com> wrote:
> >>>>
> >>>> On 2021/3/11 17:14, Marc Zyngier wrote:
> >>>>> On Wed, 27 Jan 2021 12:13:36 +0000,
> >>>>> Shenming Lu <lushenming at huawei.com> wrote:
> >>>>>>
> >>>>>> From: Zenghui Yu <yuzenghui at huawei.com>
> >>>>>>
> >>>>>> When setting the forwarding path of a VLPI (switch to the HW mode),
> >>>>>> we could also transfer the pending state from irq->pending_latch to
> >>>>>> VPT (especially in migration, the pending states of VLPIs are restored
> >>>>>> into kvm's vgic first). And we currently send "INT+VSYNC" to make
> >>>>>> a VLPI pending.
> >>>>>>
> >>>>>> Signed-off-by: Zenghui Yu <yuzenghui at huawei.com>
> >>>>>> Signed-off-by: Shenming Lu <lushenming at huawei.com>
> >>>>>> ---
> >>>>>> arch/arm64/kvm/vgic/vgic-v4.c | 14 ++++++++++++++
> >>>>>> 1 file changed, 14 insertions(+)
> >>>>>>
> >>>>>> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
> >>>>>> index ac029ba3d337..a3542af6f04a 100644
> >>>>>> --- a/arch/arm64/kvm/vgic/vgic-v4.c
> >>>>>> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
> >>>>>> @@ -449,6 +449,20 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
> >>>>>>  	irq->host_irq	= virq;
> >>>>>>  	atomic_inc(&map.vpe->vlpi_count);
> >>>>>>  
> >>>>>> +	/* Transfer pending state */
> >>>>>> +	if (irq->pending_latch) {
> >>>>>> +		ret = irq_set_irqchip_state(irq->host_irq,
> >>>>>> +					    IRQCHIP_STATE_PENDING,
> >>>>>> +					    irq->pending_latch);
> >>>>>> +		WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
> >>>>>> +
> >>>>>> +		/*
> >>>>>> +		 * Let it be pruned from ap_list later and don't bother
> >>>>>> +		 * the List Register.
> >>>>>> +		 */
> >>>>>> +		irq->pending_latch = false;
> >>>>>
> >>>>> NAK. If the interrupt is on the AP list, it must be pruned from it
> >>>>> *immediately*. The only case where it can be !pending and still on the
> >>>>> AP list is in the interval between sync and prune. If we start messing
> >>>>> with this, we can't reason about the state of this list anymore.
> >>>>>
> >>>>> Consider calling vgic_queue_irq_unlock() here.
> >>>>
> >>>> Thanks for the hint, but it seems that vgic_queue_irq_unlock() only
> >>>> queues an IRQ after checking; did you mean vgic_prune_ap_list() instead?
> >>>
> >>> No, I really mean vgic_queue_irq_unlock(). It can be used to remove
> >>> the pending state from an interrupt, and drop it from the AP
> >>> list. This is exactly what happens when clearing the pending state of
> >>> a level interrupt, for example.
> >>
> >> Hi, I have gone through vgic_queue_irq_unlock() more than once, but
> >> still can't find the place in it that drops an IRQ from the AP
> >> list... Did I miss something?... Or could you help point it
> >> out? Thanks very much for this!
> >
> > NO, you are right. I think this is a missing optimisation. Please call
> > the function anyway, as that's what is required to communicate a
> > change of state in general.
> >
> > I'll have a think about it.
>
> Maybe we could call vgic_prune_ap_list() if (irq->vcpu &&
> !vgic_target_oracle(irq)) in vgic_queue_irq_unlock()...
The locking is pretty ugly in this case, and I don't want to reparse
the whole AP list. It is basically doing the same work as the
insertion, but with a list_del() instead of a list_add()...
We can live without it for now.
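To be clear about what I'm after, something along these lines would do
(a rough, untested sketch against your v3 hunk; it assumes irq->irq_lock
can be taken at this point and relies on vgic_queue_irq_unlock()
dropping it for us):

	/* Transfer pending state */
	if (irq->pending_latch) {
		unsigned long flags;

		ret = irq_set_irqchip_state(irq->host_irq,
					    IRQCHIP_STATE_PENDING,
					    irq->pending_latch);
		WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);

		/*
		 * Clear pending_latch and communicate this state
		 * change via vgic_queue_irq_unlock(), instead of
		 * fiddling with the ap_list directly.
		 */
		raw_spin_lock_irqsave(&irq->irq_lock, flags);
		irq->pending_latch = false;
		vgic_queue_irq_unlock(kvm, irq, flags);
	}

That keeps a single path that knows about the ap_list, and the pruning
optimisation can come later.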
> OK, I will retest this series and send a v4 soon. :-)
Thanks,
M.
--
Without deviation from the norm, progress is not possible.