[PATCH v2 14/54] KVM: arm/arm64: vgic-new: Implement virtual IRQ injection

Andre Przywara andre.przywara at arm.com
Thu May 5 04:24:56 PDT 2016


Hi Tom,

thanks for looking at the patches!

On 04/05/16 00:46, Tom Hanson wrote:
> On 04/28/2016 10:45 AM, Andre Przywara wrote:
> ...
>> diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
>> index fb45537..92b78a0 100644
>> --- a/virt/kvm/arm/vgic/vgic.c
>> +++ b/virt/kvm/arm/vgic/vgic.c
>> @@ -19,8 +19,31 @@
> ... 
>> +/*
>> + * Locking order is always:
>> + *   vgic_cpu->ap_list_lock
>> + *     vgic_irq->irq_lock
>> + *
>> + * (that is, always take the ap_list_lock before the struct vgic_irq lock).
>> + *
>> + * When taking more than one ap_list_lock at the same time, always take the
>> + * lowest numbered VCPU's ap_list_lock first, so:
>> + *   vcpuX->vcpu_id < vcpuY->vcpu_id:
>> + *     spin_lock(vcpuX->arch.vgic_cpu.ap_list_lock);
>> + *     spin_lock(vcpuY->arch.vgic_cpu.ap_list_lock);
>> + */
> 
> The code would be safer and easier to maintain in the long run if there
> were functions provided which implement these 2 rules.  Something along
> the line of:
> 
>   void vgic_lock_aplist_irq(spinlock_t *ap_list_lock, spinlock_t *irq_lock){
>       spin_lock(ap_list_lock);
>       spin_lock(irq_lock);
>   }

The idea isn't too bad indeed.
I see that we could use that in vgic_queue_irq_unlock() and
vgic_mmio_write_sactive().
But as Marc mentioned in a conversation yesterday, we will end up with a
mixture of wrapped locks and open-coded lock sequences. See for instance
vgic_prune_ap_list(), where we have the same sequence, but we can't use
vgic_lock_aplist_irq() because the IRQ lock is taken inside the loop
while the ap_list_lock is taken once outside of it.

Marc, Christoffer, what is your opinion here?

> and  (borrowing from patch 16/54):
> 
>   void vgic_lock_cpu_aplist_pair(struct kvm_vcpu *vcpu1, struct kvm_vcpu
> *vcpu2){
>       struct kvm_vcpu *vcpuA, *vcpuB;
>       /*
>        * Ensure locking order by always locking the smallest
>        * ID first.
>        */
>       if (vcpu1->vcpu_id < vcpu2->vcpu_id) {
>           vcpuA = vcpu1;
>           vcpuB = vcpu2;
>       } else {
>           vcpuA = vcpu2;
>           vcpuB = vcpu1;
>       }
> 
>       spin_lock(&vcpuA->arch.vgic_cpu.ap_list_lock);
>       spin_lock(&vcpuB->arch.vgic_cpu.ap_list_lock);
>   }

I am not sure this is worth it. We have exactly one user of this
sequence, so I'd rather keep it and the comments there.
Also the use case is probably confined to the core VGIC code.

Cheers,
Andre.

> 
> With, of course the matching unlock functions.
> 
> Then, as long as new code calls the correct function, the order is
> always correct.  Easy to write, easy to review.  And beats the heck out
> of chasing a deadlock at some point in the future.
> 

More information about the linux-arm-kernel mailing list