[PATCH v2 2/4] KVM: arm64: vgic: Explicitly implement vgic_dist::ready ordering

Yao Yuan yaoyuan0329os at gmail.com
Fri Jul 18 19:23:53 PDT 2025


On Fri, Jul 18, 2025 at 03:25:46PM +0000, Keir Fraser wrote:
> On Fri, Jul 18, 2025 at 02:53:42PM +0000, Keir Fraser wrote:
> > On Thu, Jul 17, 2025 at 01:44:48PM +0800, Yao Yuan wrote:
> > > On Wed, Jul 16, 2025 at 11:07:35AM +0800, Keir Fraser wrote:
> > > > In preparation to remove synchronize_srcu() from MMIO registration,
> > > > remove the distributor's dependency on this implicit barrier by
> > > > direct acquire-release synchronization on the flag write and its
> > > > lock-free check.
> > > >
> > > > Signed-off-by: Keir Fraser <keirf at google.com>
> > > > ---
> > > >  arch/arm64/kvm/vgic/vgic-init.c | 11 ++---------
> > > >  1 file changed, 2 insertions(+), 9 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
> > > > index 502b65049703..bc83672e461b 100644
> > > > --- a/arch/arm64/kvm/vgic/vgic-init.c
> > > > +++ b/arch/arm64/kvm/vgic/vgic-init.c
> > > > @@ -567,7 +567,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
> > > >  	gpa_t dist_base;
> > > >  	int ret = 0;
> > > >
> > > > -	if (likely(dist->ready))
> > > > +	if (likely(smp_load_acquire(&dist->ready)))
> > > >  		return 0;
> > > >
> > > >  	mutex_lock(&kvm->slots_lock);
> > > > @@ -598,14 +598,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
> > > >  		goto out_slots;
> > > >  	}
> > > >
> > > > -	/*
> > > > -	 * kvm_io_bus_register_dev() guarantees all readers see the new MMIO
> > > > -	 * registration before returning through synchronize_srcu(), which also
> > > > -	 * implies a full memory barrier. As such, marking the distributor as
> > > > -	 * 'ready' here is guaranteed to be ordered after all vCPUs having seen
> > > > -	 * a completely configured distributor.
> > > > -	 */
> > > > -	dist->ready = true;
> > > > +	smp_store_release(&dist->ready, true);
> > >
> > > No need for the store-release and load-acquire when replacing
> > > synchronize_srcu_expedited() with call_srcu(), IIUC:
> > >
> > > Tree SRCU on SMP:
> > > call_srcu()
> > >   __call_srcu()
> > >     srcu_gp_start_if_needed()
> > >       __srcu_read_unlock_nmisafe()
> > >         #ifdef CONFIG_NEED_SRCU_NMI_SAFE
> > >           smp_mb__before_atomic() // __smp_mb() on arm64, a no-op on x86.
> > >         #else
> > >           __srcu_read_unlock()
> > >             smp_mb()
> > >         #endif
> >
> > I don't think it's nice to depend on an implementation detail of
> > kvm_io_bus_register_dev() and, transitively, on implementation details
> > of call_srcu().

This is a good point; I agree with you.

>
> Also I should note that this is moot because the smp_mb() would *not*
> safely replace the load-acquire.

Hmm... do you mean it can't order the write to dist->ready
here against a concurrent read of it in another thread?
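
If so, a sketch of the case as I understand it (illustrative
pseudo-code only, not the actual call sites):

  CPU0: kvm_vgic_map_resources()      CPU1: lock-free fast path
  ------------------------------      -------------------------
  configure distributor state;        r0 = READ_ONCE(dist->ready);
  smp_mb(); /* from call_srcu() */    if (r0)
  WRITE_ONCE(dist->ready, true);              access distributor state;

The smp_mb() on CPU0 orders the configuration before the flag write,
but nothing on CPU1 orders the flag read before the later accesses
(a control dependency only orders subsequent stores, not loads), so
CPU1 could observe ready == true and still see a partially configured
distributor. Is that the reordering you mean?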

>
> > kvm_vgic_map_resources() isn't called that often and can afford its
> > own synchronization.
> >
> >  -- Keir
> >
> > > Tiny SRCU on UP:
> > > Should have no memory ordering issues on UP.
> > >
> > > >  	goto out_slots;
> > > >  out:
> > > >  	mutex_unlock(&kvm->arch.config_lock);
> > > > --
> > > > 2.50.0.727.gbf7dc18ff4-goog
> > > >
>
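
For reference, my reading of the pairing this patch establishes (the
two sites quoted from the diff above):

  /* writer: kvm_vgic_map_resources(), after full configuration */
  smp_store_release(&dist->ready, true);

  /* lock-free reader: the fast path at the top of the function */
  if (likely(smp_load_acquire(&dist->ready)))
          return 0;

The acquire on the reader pairs with the release on the writer, so
once a vCPU observes ready == true, its later accesses to the
distributor are guaranteed to see the completed configuration,
without relying on barriers buried in call_srcu().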


