[PATCH v4 07/14] KVM: arm64: Rework specifying restricted features for protected VMs

Fuad Tabba tabba at google.com
Wed Dec 11 05:11:22 PST 2024


On Wed, 11 Dec 2024 at 12:34, Quentin Perret <qperret at google.com> wrote:
>
> On Monday 02 Dec 2024 at 15:47:34 (+0000), Fuad Tabba wrote:
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index f333b189fb43..230b0638f0c2 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -1422,6 +1422,7 @@ static inline bool __vcpu_has_feature(const struct kvm_arch *ka, int feature)
> >       return test_bit(feature, ka->vcpu_features);
> >  }
> >
> > +#define kvm_vcpu_has_feature(k, f)   __vcpu_has_feature(&(k)->arch, (f))
> >  #define vcpu_has_feature(v, f)       __vcpu_has_feature(&(v)->kvm->arch, (f))
>
> Nit: I see nested uses the raw __vcpu_has_feature() helper, so I guess
> we should try and be consistent. Either way works, we can do the same
> thing in sys_regs.c, or convert nested.c to use kvm_vcpu_has_feature().

I'll add a patch on the respin to convert the nested callers. The
lines that call these macros/helpers are already quite long, and even
though it will add a bit of churn, IMO the resulting code is more
readable with a kvm_vcpu_has_feature() helper.

Cheers,
/fuad
