[PATCH v3 7/7] KVM: arm64: Use TLBI range-based instructions for unmap

Oliver Upton oliver.upton at linux.dev
Tue May 16 11:46:39 PDT 2023


On Tue, May 16, 2023 at 10:21:33AM -0700, Raghavendra Rao Ananta wrote:
> On Fri, May 12, 2023 at 10:02 AM Oliver Upton <oliver.upton at linux.dev> wrote:
> > >  int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
> > >  {
> > > +     int ret;
> > > +     struct stage2_unmap_data unmap_data = {
> > > +             .pgt = pgt,
> > > +             /*
> > > +              * If FEAT_TLBIRANGE is implemented, defer the individual PTE
> > > +              * TLB invalidations until the entire walk is finished, and
> > > +              * then use the range-based TLBI instructions to do the
> > > +              * invalidations. Condition this upon S2FWB in order to avoid
> > > +              * a page-table walk again to perform the CMOs after TLBI.
> > > +              */
> > > +             .skip_pte_tlbis = system_supports_tlb_range() &&
> > > +                                     stage2_has_fwb(pgt),
> >
> > Why can't the underlying walker just call these two helpers directly?
> > There are static keys behind these...
> >
> I wasn't aware of that. Although, I tried to look into the
> definitions, but couldn't understand how static keys are at play here.
> By any chance are you referring to the alternative_has_feature_*()
> implementations when checking for cpu capabilities?

Ah, right, these were recently changed to rely on alternative patching
in commit 21fb26bfb01f ("arm64: alternatives: add alternative_has_feature_*()").
Even so, the point stands: alternative patching completely eliminates
the conditional branch once the presence (or absence) of a particular
feature is known at boot.

Initializing a local with the presence/absence of a feature defeats such
an optimization.
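
To make the cost difference concrete, here is a rough sketch (not the
actual patch; the walker shape and surrounding names are approximated
from the series under review):

	/*
	 * Sketch only. Calling the helpers directly at the use site
	 * lets the compiler emit an alternative: after boot-time
	 * patching, the "check" becomes a straight-line instruction
	 * sequence with no load and no conditional branch.
	 */
	static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
				       enum kvm_pgtable_walk_flags visit)
	{
		...
		if (system_supports_tlb_range() && stage2_has_fwb(pgt))
			return 0;	/* defer to a range-based TLBI after the walk */
		...
	}

	/*
	 * Whereas latching the answer into stage2_unmap_data at walk
	 * setup:
	 *
	 *	.skip_pte_tlbis = system_supports_tlb_range() &&
	 *				stage2_has_fwb(pgt),
	 *
	 * turns every subsequent check back into a memory load plus a
	 * real conditional branch, taken once per visited PTE:
	 */
	if (unmap_data->skip_pte_tlbis)
		return 0;

Either way the walker reads much the same, but the first form keeps
the per-PTE fast path branch-free on TLBIRANGE-capable hardware.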

-- 
Thanks,
Oliver


