[RFC PATCH 06/17] KVM: arm64: Implement break-before-make sequence for parallel walks

Oliver Upton oupton at google.com
Wed Apr 20 10:06:47 PDT 2022


On Wed, Apr 20, 2022 at 9:55 AM Quentin Perret <qperret at google.com> wrote:
>
> On Friday 15 Apr 2022 at 21:58:50 (+0000), Oliver Upton wrote:
> > +/*
> > + * Used to indicate a pte for which a 'make-before-break' sequence is in
>
> 'break-before-make' presumably :-) ?

Gosh, I'd certainly hope so! ;)

> <snip>
> > +static void stage2_make_pte(kvm_pte_t *ptep, kvm_pte_t new, struct kvm_pgtable_mm_ops *mm_ops)
> > +{
> > +     /* Yikes! We really shouldn't install to an entry we don't own. */
> > +     WARN_ON(!stage2_pte_is_locked(*ptep));
> > +
> > +     if (stage2_pte_is_counted(new))
> > +             mm_ops->get_page(ptep);
> > +
> > +     if (kvm_pte_valid(new)) {
> > +             WRITE_ONCE(*ptep, new);
> > +             dsb(ishst);
> > +     } else {
> > +             smp_store_release(ptep, new);
> > +     }
> > +}
>
> I'm struggling a bit to understand this pattern. We currently use
> smp_store_release() to install valid mappings, which this patch seems
> to change. Is the behaviour change intentional? If so, could you please
> share some details about the reasoning that applies here?

This is unintentional. We still need to do smp_store_release(),
especially considering we acquire a lock on the PTE in the break
pattern. In fact, I believe the compare-exchange could be loosened to
have only acquire semantics. What I had really meant to do here (but
goofed) was to avoid the DSB when changing between invalid PTEs.
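To make the intent concrete, here is a rough userspace sketch of the
corrected install helper: publish with release semantics unconditionally
(so the PTE "unlock" orders after the table update), and only issue the
DSB when the new entry is valid, since an invalid->invalid change has
nothing the hardware walker needs to observe. This is just a model, not
the real patch: C11 atomics stand in for WRITE_ONCE()/smp_store_release(),
a seq_cst fence stands in for dsb(ishst), the lock-bit value and the
omission of the mm_ops refcounting are made up for illustration.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t kvm_pte_t;

#define KVM_PTE_VALID		((kvm_pte_t)1)	/* bit 0, as in the stage-2 format */
/* Hypothetical software bit marking an invalid PTE as locked by a walker. */
#define KVM_INVALID_PTE_LOCKED	((kvm_pte_t)0x200)

static bool kvm_pte_valid(kvm_pte_t pte)
{
	return pte & KVM_PTE_VALID;
}

static bool stage2_pte_is_locked(kvm_pte_t pte)
{
	return !kvm_pte_valid(pte) && (pte & KVM_INVALID_PTE_LOCKED);
}

/* Stand-in for the kernel's dsb(ishst); a full fence suffices for the model. */
static void dsb_ishst(void)
{
	atomic_thread_fence(memory_order_seq_cst);
}

static void stage2_make_pte(_Atomic kvm_pte_t *ptep, kvm_pte_t new)
{
	/* Yikes! We really shouldn't install to an entry we don't own. */
	assert(stage2_pte_is_locked(atomic_load_explicit(ptep, memory_order_relaxed)));

	/* Always release: pairs with the acquire on the locking cmpxchg. */
	atomic_store_explicit(ptep, new, memory_order_release);

	/* Only pay for the DSB when the walker can actually see the entry. */
	if (kvm_pte_valid(new))
		dsb_ishst();
}
```

The point of the restructure is that the DSB decision and the store
ordering are now independent: every store is a release, and the barrier
is purely about making a newly valid entry visible to the page-table
walker.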

Thanks for the review!

--
Best,
Oliver
