[PATCH v2 2/4] KVM: arm64: vgic: Explicitly implement vgic_dist::ready ordering
Yao Yuan
yaoyuan0329os at gmail.com
Fri Jul 18 19:15:56 PDT 2025
On Fri, Jul 18, 2025 at 08:00:17AM -0700, Sean Christopherson wrote:
> On Thu, Jul 17, 2025, Yao Yuan wrote:
> > On Wed, Jul 16, 2025 at 11:07:35AM +0800, Keir Fraser wrote:
> > > In preparation to remove synchronize_srcu() from MMIO registration,
> > > remove the distributor's dependency on this implicit barrier by
> > > direct acquire-release synchronization on the flag write and its
> > > lock-free check.
> > >
> > > Signed-off-by: Keir Fraser <keirf at google.com>
> > > ---
> > > arch/arm64/kvm/vgic/vgic-init.c | 11 ++---------
> > > 1 file changed, 2 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
> > > index 502b65049703..bc83672e461b 100644
> > > --- a/arch/arm64/kvm/vgic/vgic-init.c
> > > +++ b/arch/arm64/kvm/vgic/vgic-init.c
> > > @@ -567,7 +567,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
> > > gpa_t dist_base;
> > > int ret = 0;
> > >
> > > - if (likely(dist->ready))
> > > + if (likely(smp_load_acquire(&dist->ready)))
> > > return 0;
> > >
> > > mutex_lock(&kvm->slots_lock);
> > > @@ -598,14 +598,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
> > > goto out_slots;
> > > }
> > >
> > > - /*
> > > - * kvm_io_bus_register_dev() guarantees all readers see the new MMIO
> > > - * registration before returning through synchronize_srcu(), which also
> > > - * implies a full memory barrier. As such, marking the distributor as
> > > - * 'ready' here is guaranteed to be ordered after all vCPUs having seen
> > > - * a completely configured distributor.
> > > - */
> > > - dist->ready = true;
> > > + smp_store_release(&dist->ready, true);
> >
> > No need for the store-release and load-acquire when replacing
> > synchronize_srcu_expedited() w/ call_srcu(), IIUC:
>
> This isn't about using call_srcu(), because it's not actually about kvm->buses.
> This code is concerned with ensuring that all stores to kvm->arch.vgic are ordered
> before the store to set kvm->arch.vgic.ready, so that vCPUs never see "ready==true"
> with a half-baked distributor.
>
> In the current code, kvm_vgic_map_resources() relies on the synchronize_srcu() in
> kvm_io_bus_register_dev() to provide the ordering guarantees. Switching to
> smp_store_release() + smp_load_acquire() removes the dependency on the
> synchronize_srcu() so that the synchronize_srcu() call can be safely removed.
Yes, I understand this and agree with your point.

Just for discussion: I thought it should also work even without
introducing the load-acquire + store-release after switching to
call_srcu(): the smp_mb() in call_srcu() orders all the stores
to kvm->arch.vgic before the store to kvm->arch.vgic.ready in
the current implementation.