[PATCH 01/16] KVM: arm64: Generalise VM features into a set of flags

Marc Zyngier maz at kernel.org
Wed Jul 28 02:41:00 PDT 2021


On Tue, 27 Jul 2021 19:10:27 +0100,
Will Deacon <will at kernel.org> wrote:
> 
> On Thu, Jul 15, 2021 at 05:31:44PM +0100, Marc Zyngier wrote:
> > We currently deal with a set of booleans for VM features,
> > while they could be better represented as a set of flags
> > contained in an unsigned long, similarly to what we are
> > doing on the CPU side.
> > 
> > Signed-off-by: Marc Zyngier <maz at kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 12 +++++++-----
> >  arch/arm64/kvm/arm.c              |  5 +++--
> >  arch/arm64/kvm/mmio.c             |  3 ++-
> >  3 files changed, 12 insertions(+), 8 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 41911585ae0c..4add6c27251f 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -122,7 +122,10 @@ struct kvm_arch {
> >  	 * should) opt in to this feature if KVM_CAP_ARM_NISV_TO_USER is
> >  	 * supported.
> >  	 */
> > -	bool return_nisv_io_abort_to_user;
> > +#define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER	0
> > +	/* Memory Tagging Extension enabled for the guest */
> > +#define KVM_ARCH_FLAG_MTE_ENABLED			1
> > +	unsigned long flags;
> 
> One downside of packing all these together is that updating 'flags' now
> requires an atomic rmw sequence (i.e. set_bit()). Then again, that's
> probably for the best anyway given that kvm_vm_ioctl_enable_cap() looks
> like it doesn't hold any locks.

That, and these operations are supposed to be extremely rare anyway.
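
For reference, the arm.c side of this is simply swapping the boolean
assignment for set_bit() in kvm_vm_ioctl_enable_cap() -- from memory,
something along these lines:

	case KVM_CAP_ARM_NISV_TO_USER:
		r = 0;
		set_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
			&kvm->arch.flags);
		break;

with mmio.c doing a test_bit() on the same flag, so the update stays
atomic even though no lock is held on that path.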

> 
> >  	/*
> >  	 * VM-wide PMU filter, implemented as a bitmap and big enough for
> > @@ -133,9 +136,6 @@ struct kvm_arch {
> >  
> >  	u8 pfr0_csv2;
> >  	u8 pfr0_csv3;
> > -
> > -	/* Memory Tagging Extension enabled for the guest */
> > -	bool mte_enabled;
> >  };
> >  
> >  struct kvm_vcpu_fault_info {
> > @@ -777,7 +777,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
> >  #define kvm_arm_vcpu_sve_finalized(vcpu) \
> >  	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
> >  
> > -#define kvm_has_mte(kvm) (system_supports_mte() && (kvm)->arch.mte_enabled)
> > +#define kvm_has_mte(kvm)					\
> > +	(system_supports_mte() &&				\
> > +	 test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &(kvm)->arch.flags))
> 
> Not an issue with this patch, but I just noticed that the
> system_supports_mte() check is redundant here as we only allow the flag to
> be set if that's already the case.

It allows us to save a memory access when system_supports_mte() is
false (it is eventually implemented as a static key). On the other
hand, there is so much code inlined because it is a non-final cap
that we probably lose that benefit anyway...
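
To put some detail on that: system_supports_mte() boils down to
something like

	static __always_inline bool system_supports_mte(void)
	{
		return IS_ENABLED(CONFIG_ARM64_MTE) &&
			cpus_have_const_cap(ARM64_MTE);
	}

and cpus_have_const_cap() ends up inlining both the static-branch
fast path and the cpu_hwcaps bitmap fallback for the not-yet-finalized
case, which is where the extra code comes from.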

	M.

-- 
Without deviation from the norm, progress is not possible.


