[PATCH v2 07/15] KVM: arm64: Use an opaque type for pteps

Oliver Upton oliver.upton at linux.dev
Thu Oct 27 15:31:15 PDT 2022


On Thu, Oct 20, 2022 at 11:32:28AM +0300, Oliver Upton wrote:
> On Wed, Oct 19, 2022 at 11:17:43PM +0000, Sean Christopherson wrote:
> > On Fri, Oct 07, 2022, Oliver Upton wrote:

[...]

> > > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > > index 02c33fccb178..6b6e1ed7ee2f 100644
> > > --- a/arch/arm64/kvm/hyp/pgtable.c
> > > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > > @@ -175,13 +175,14 @@ static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data,
> > >  }
> > >  
> > >  static int __kvm_pgtable_walk(struct kvm_pgtable_walk_data *data,
> > > -			      struct kvm_pgtable_mm_ops *mm_ops, kvm_pte_t *pgtable, u32 level);
> > > +			      struct kvm_pgtable_mm_ops *mm_ops, kvm_pteref_t pgtable, u32 level);
> > >  
> > >  static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
> > >  				      struct kvm_pgtable_mm_ops *mm_ops,
> > > -				      kvm_pte_t *ptep, u32 level)
> > > +				      kvm_pteref_t pteref, u32 level)
> > >  {
> > >  	enum kvm_pgtable_walk_flags flags = data->walker->flags;
> > > +	kvm_pte_t *ptep = kvm_dereference_pteref(pteref, false);
> > >  	struct kvm_pgtable_visit_ctx ctx = {
> > >  		.ptep	= ptep,
> > >  		.old	= READ_ONCE(*ptep),
> > 
> > This is where you want the protection to kick in, e.g. 
> > 
> >   typedef kvm_pte_t __rcu *kvm_ptep_t;
> > 
> >   static inline kvm_pte_t kvm_read_pte(kvm_ptep_t ptep)
> >   {
> > 	return READ_ONCE(*rcu_dereference(ptep));
> >   }
> > 
> > 		.old	= kvm_read_pte(ptep),
> > 
> > In other words, it isn't the pointer itself that's protected, it's the PTE
> > that the pointer points at that's protected.
> 
> Right, but practically speaking it is the boundary at which we assert
> that protection.
> 
> Anyhow, I'll look at abstracting the actual memory accesses in the
> visitors without too much mess.

Took this in a slightly different direction after playing with it for a
while: abstracting all of the PTE accesses adds a lot of churn to the
series. Instead, adding an assertion immediately before invoking a visitor
callback (i.e. right where the raw pointer is about to be used) provides a
similar degree of assurance that we are indeed RCU-safe.
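
Roughly something like the sketch below. This is only an illustration: the
kvm_pgtable_walk_lock_held() helper name and the exact visitor_cb signature
are placeholders here, and the hyp walkers (which have no RCU at EL2) would
need a trivial stub of their own.

  static bool kvm_pgtable_walk_lock_held(void)
  {
  	/*
  	 * Shared walkers must be in an RCU read-side critical section
  	 * before the raw PTE pointer is handed to a visitor.
  	 */
  	return rcu_read_lock_held();
  }

  static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data,
  				  const struct kvm_pgtable_visit_ctx *ctx,
  				  enum kvm_pgtable_walk_flags visit)
  {
  	struct kvm_pgtable_walker *walker = data->walker;

  	/* Complain right before the callback gets to see ctx->ptep. */
  	WARN_ON_ONCE(!kvm_pgtable_walk_lock_held());

  	return walker->cb(ctx, visit);
  }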

--
Thanks,
Oliver


