[PATCH v2 20/45] KVM: arm64: Revamp vgic maintenance interrupt configuration

Marc Zyngier maz at kernel.org
Wed Nov 12 01:56:28 PST 2025


On Wed, 12 Nov 2025 08:45:45 +0000,
Oliver Upton <oupton at kernel.org> wrote:
> 
> On Wed, Nov 12, 2025 at 08:33:54AM +0000, Marc Zyngier wrote:
> > On Wed, 12 Nov 2025 00:08:37 +0000,
> > Oliver Upton <oupton at kernel.org> wrote:
> > > 
> > > On Sun, Nov 09, 2025 at 05:15:54PM +0000, Marc Zyngier wrote:
> > > > +static void summarize_ap_list(struct kvm_vcpu *vcpu,
> > > > +			      struct ap_list_summary *als)
> > > >  {
> > > >  	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
> > > >  	struct vgic_irq *irq;
> > > > -	int count = 0;
> > > > -
> > > > -	*multi_sgi = false;
> > > >  
> > > >  	lockdep_assert_held(&vgic_cpu->ap_list_lock);
> > > >  
> > > > -	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
> > > > -		int w;
> > > > +	*als = (typeof(*als)){};
> > > >  
> > > > -		raw_spin_lock(&irq->irq_lock);
> > > > -		/* GICv2 SGIs can count for more than one... */
> > > > -		w = vgic_irq_get_lr_count(irq);
> > > > -		raw_spin_unlock(&irq->irq_lock);
> > > > +	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
> > > > +		scoped_guard(raw_spinlock, &irq->irq_lock) {
> > > > +			if (vgic_target_oracle(irq) != vcpu)
> > > > +				continue;
> > > 
> > > From our conversation about this sort of thing a few weeks ago, won't
> > > this 'continue' interact poorly with the for loop that scoped_guard()
> > > expands to?
> > 
> > Gahhh... I was sure I had killed that everywhere, but obviously failed
> > to. I wish there were a coccinelle script to detect this sort of broken
> > construct (where are the script kiddies when you really need them?).
> > 
> > Thanks for spotting it!
> > 
> > > Consistent with the other checks against the destination oracle you'll
> > > probably want a branch hint too.
> > 
> > Yup, I'll add that.
> 
> I can take care of it when applying. These patches need to bake :)

Yes, they do. Here's the current state of additional changes I have
(compile tested only).

Thanks,

	M.

diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
index bd67ad1fcad5e..28184582f23d3 100644
--- a/arch/arm64/kvm/vgic/vgic.c
+++ b/arch/arm64/kvm/vgic/vgic.c
@@ -851,15 +851,15 @@ static void summarize_ap_list(struct kvm_vcpu *vcpu,
 	*als = (typeof(*als)){};
 
 	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
-		scoped_guard(raw_spinlock, &irq->irq_lock) {
-			if (vgic_target_oracle(irq) != vcpu)
-				continue;
-
-			if (!irq->active)
-				als->nr_pend++;
-			else
-				als->nr_act++;
-		}
+		guard(raw_spinlock)(&irq->irq_lock);
+
+		if (unlikely(vgic_target_oracle(irq) != vcpu))
+			continue;
+
+		if (!irq->active)
+			als->nr_pend++;
+		else
+			als->nr_act++;
 
 		if (irq->intid < VGIC_NR_SGIS)
 			als->nr_sgi++;
@@ -915,8 +915,8 @@ static void summarize_ap_list(struct kvm_vcpu *vcpu,
  *
  *      - deactivation can happen in any order, and we cannot rely on
  *	  EOImode=0's coupling of priority-drop and deactivation which
- *	  imposes strict reverse Ack order. This means that DIR must be set
- *	  if we have active interrupts outside of the LRs.
+ *	  imposes strict reverse Ack order. This means that DIR must
+ *	  trap if we have active interrupts outside of the LRs.
  *
  *      - deactivation of SPIs can occur on any CPU, while the SPI is only
  *	  present in the ap_list of the CPU that actually ack-ed it. In that

-- 
Without deviation from the norm, progress is not possible.
