[PATCH v5 08/14] KVM: arm64: Protect stage-2 traversal with RCU

Marc Zyngier maz at kernel.org
Thu Nov 10 05:34:14 PST 2022


On Wed, 09 Nov 2022 22:25:38 +0000,
Ben Gardon <bgardon at google.com> wrote:
> 
> On Mon, Nov 7, 2022 at 1:57 PM Oliver Upton <oliver.upton at linux.dev> wrote:
> >
> > Use RCU to safely walk the stage-2 page tables in parallel. Acquire and
> > release the RCU read lock when traversing the page tables. Defer the
> > freeing of table memory to an RCU callback. Indirect the calls into RCU
> > and provide stubs for hypervisor code, as RCU is not available in such a
> > context.
> >
> > The RCU protection doesn't amount to much at the moment, as readers are
> > already protected by the read-write lock (all walkers that free table
> > memory take the write lock). Nonetheless, a subsequent change will
> > further relax the locking requirements around the stage-2 MMU, thereby
> > depending on RCU.
> >
> > Signed-off-by: Oliver Upton <oliver.upton at linux.dev>
> > ---
> >  arch/arm64/include/asm/kvm_pgtable.h | 49 ++++++++++++++++++++++++++++
> >  arch/arm64/kvm/hyp/pgtable.c         | 10 +++++-
> >  arch/arm64/kvm/mmu.c                 | 14 +++++++-
> >  3 files changed, 71 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> > index e70cf57b719e..7634b6964779 100644
> > --- a/arch/arm64/include/asm/kvm_pgtable.h
> > +++ b/arch/arm64/include/asm/kvm_pgtable.h
> > @@ -37,6 +37,13 @@ static inline u64 kvm_get_parange(u64 mmfr0)
> >
> >  typedef u64 kvm_pte_t;
> >
> > +/*
> > + * RCU cannot be used in a non-kernel context such as the hyp. As such, page
> > + * table walkers used in hyp do not call into RCU and instead use other
> > + * synchronization mechanisms (such as a spinlock).
> > + */
> > +#if defined(__KVM_NVHE_HYPERVISOR__) || defined(__KVM_VHE_HYPERVISOR__)
> > +
> >  typedef kvm_pte_t *kvm_pteref_t;
> >
> >  static inline kvm_pte_t *kvm_dereference_pteref(kvm_pteref_t pteref, bool shared)
> > @@ -44,6 +51,40 @@ static inline kvm_pte_t *kvm_dereference_pteref(kvm_pteref_t pteref, bool shared
> >         return pteref;
> >  }
> >
> > +static inline void kvm_pgtable_walk_begin(void) {}
> > +static inline void kvm_pgtable_walk_end(void) {}
> > +
> > +static inline bool kvm_pgtable_walk_lock_held(void)
> > +{
> > +       return true;
> 
> Forgive my ignorance, but does hyp not use an MMU lock at all? Seems
> like this would be a good place to add a lockdep check.

For normal KVM, we don't mess with the page tables in the HYP code *at
all*. That's just not the place. It is for pKVM that this is a bit
different, as EL2 is where the stuff happens.

Lockdep at EL2 is wishful thinking. However, we have the next best
thing, which is an assertion such as:

	hyp_assert_lock_held(&host_kvm.lock);

though at the moment, this is a *global* lock that serialises
everyone, as a guest stage-2 operation usually affects the host
stage-2 as well (ownership change and such). Quentin should be able to
provide more details on that.

	M.

-- 
Without deviation from the norm, progress is not possible.


