[PATCH v2 02/25] KVM: arm64: Add feature checking helpers

Marc Zyngier maz at kernel.org
Sun Feb 4 03:44:58 PST 2024


On Sun, 04 Feb 2024 11:08:45 +0000,
Marc Zyngier <maz at kernel.org> wrote:
> 
> On Fri, 02 Feb 2024 17:13:07 +0000,
> Suzuki K Poulose <suzuki.poulose at arm.com> wrote:
> > 
> > Hi Marc,
> > 
> > On 30/01/2024 20:45, Marc Zyngier wrote:
> > > In order to make it easier to check whether a particular feature
> > > is exposed to a guest, add a new set of helpers, with kvm_has_feat()
> > > being the most useful.
> > > 
> > > Let's start making use of them in the PMU code (courtesy of Oliver).
> > > Follow-up work will introduce additional use patterns.
> > 
> > I think there is a bit of inconsistency in the macros for signed
> > and unsigned. The unsigned fields are extracted (i.e., as if they
> > were shifted to bit 0). But the signed fields are not shifted
> > completely to bit '0' (in fact to different positions) and eventually
> > we compare wrong things.
> > 
> > Using ID_AA64PFR0_EL1, fld=EL2, val=IMP as the unsigned example and
> > ID_AA64PFR0_EL1, AdvSIMD, NI as the signed one:
> > 
> > > 
> > > Co-developed-by: Oliver Upton <oliver.upton at linux.dev>
> > > Signed-off-by: Oliver Upton <oliver.upton at linux.dev>
> > > Signed-off-by: Marc Zyngier <maz at kernel.org>
> > > ---
> > >   arch/arm64/include/asm/kvm_host.h | 53 +++++++++++++++++++++++++++++++
> > >   arch/arm64/kvm/pmu-emul.c         | 11 ++++---
> > >   arch/arm64/kvm/sys_regs.c         |  6 ++--
> > >   include/kvm/arm_pmu.h             | 11 -------
> > >   4 files changed, 61 insertions(+), 20 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > index 21c57b812569..c0cf9c5f5e8d 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -1233,4 +1233,57 @@ static inline void kvm_hyp_reserve(void) { }
> > >   void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> > >   bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
> > > +
> > > +#define __expand_field_sign_unsigned(id, fld, val)		\
> > > +	((u64)(id##_##fld##_##val))
> > 
> > For unsigned features we get the actual "field" value, not the value
> > in position, e.g., ID_AA64PFR0_EL1_EL2_IMP = (0x1).
> > 
> > > +
> > > +#define __expand_field_sign_signed(id, fld, val)			\
> > > +	({								\
> > > +		s64 __val = id##_##fld##_##val;				\
> > > +		__val <<= 64 - id##_##fld##_WIDTH;			\
> > > +		__val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH;	\
> > 
> > But for signed fields, we shift them back into the "position" as in
> > the ID_REG. e.g.,
> > 
> > With ID_AA64PFR0_EL1, AdvSIMD, NI we get:
> > 
> > 	__val = ID_AA64PFR0_EL1_AdvSIMD_NI; /* = 0xf */
> > 	__val <<= 64 - 4;		/* 0xf0_00_00_00_00_00_00_00 */
> > 	__val >>= 64 - 20 - 4;		/* 0xff_ff_ff_ff_ff_f0_00_00 */
> > 
> > I think the last line instead should be:
> > 	__val >>= 64 - id##_##fld##_WIDTH;
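> > 
> > which, continuing the example above, gives:
> > 
> > 	__val >>= 64 - 4;	/* 0xff_ff_ff_ff_ff_ff_ff_ff, i.e. -1 */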
> 
> Huh, you're absolutely right.
> 
> > 
> > > +									\
> > > +		__val;							\
> > > +	})
> > > +
> > > +#define expand_field_sign(id, fld, val)					\
> > > +	(id##_##fld##_SIGNED ?						\
> > > +	 __expand_field_sign_signed(id, fld, val) :			\
> > > +	 __expand_field_sign_unsigned(id, fld, val))
> > > +
> > > +#define get_idreg_field_unsigned(kvm, id, fld)				\
> > > +	({								\
> > > +		u64 __val = IDREG(kvm, SYS_##id);			\
> > > +		__val &= id##_##fld##_MASK;				\
> > > +		__val >>= id##_##fld##_SHIFT;				\
> > > +									\
> > 
> > We extract the field value for unsigned fields, i.e., shifted to
> > bit "0", and that matches expand_field_sign().
> > 
> > > +		__val;							\
> > > +	})
> > > +
> > > +#define get_idreg_field_signed(kvm, id, fld)				\
> > > +	({								\
> > > +		s64 __val = IDREG(kvm, SYS_##id);			\
> > > +		__val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH;	\
> > > +		__val >>= id##_##fld##_SHIFT;				\
> > 
> > However, here we get (assuming the AdvSIMD field of ID_AA64PFR0_EL1
> > is 0xf, and all other fields are 0 for clarity):
> > 	
> > 	__val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);	/* = 0xf0_00_00 (0xf << 20) */
> > 	__val <<= 64 - 20 - 4;	/* = 0xf0_00_00_00_00_00_00_00 */
> > 	__val >>= 20;		/* = 0xff_ff_ff_00_00_00_00_00 */
> 
> Gah... again!
> 
> >
> > Thus they don't match. Instead the last line should be:
> > 
> > 	__val >>= id##_##fld##_WIDTH;
> 
> Shouldn't this be (64 - WIDTH) instead, since we want the value to be
> shifted to bit 0? Otherwise, you get 0xff_00_00_00_00_00_00_00 (as per
> your example).
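> 
> With (64 - WIDTH), i.e. >>= 60 here, the arithmetic shift gives
> 0xff_ff_ff_ff_ff_ff_ff_ff instead, i.e. -1, matching the expanded
> field value.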
> 
> Thanks a lot for spotting those, much appreciated.

And since it is now obvious to anyone that I can't shift properly,
let's just not do that. As it turns out, the kernel already has the
required helpers (sign_extend64() and FIELD_GET()), and I can stop
being clever with these:

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ea8acc1914aa..3457fa313bad 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1290,11 +1290,8 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
 
 #define __expand_field_sign_signed(id, fld, val)			\
 	({								\
-		s64 __val = id##_##fld##_##val;				\
-		__val <<= 64 - id##_##fld##_WIDTH;			\
-		__val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH;	\
-									\
-		__val;							\
+		u64 __val = id##_##fld##_##val;				\
+		sign_extend64(__val, id##_##fld##_WIDTH - 1);		\
 	})
 
 #define expand_field_sign(id, fld, val)					\
@@ -1313,11 +1310,9 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
 
 #define get_idreg_field_signed(kvm, id, fld)				\
 	({								\
-		s64 __val = IDREG(kvm, SYS_##id);			\
-		__val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH;	\
-		__val >>= id##_##fld##_SHIFT;				\
-									\
-		__val;							\
+		u64 __val = IDREG(kvm, SYS_##id);			\
+		__val = FIELD_GET(id##_##fld##_MASK, __val);		\
+		sign_extend64(__val, id##_##fld##_WIDTH - 1);		\
 	})
 
 #define get_idreg_field_enum(kvm, id, fld)				\
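
For completeness, a quick userspace sketch of what the two helpers now
boil down to for the AdvSIMD == NI case discussed above. This is not
kernel code: sign_extend64() is modelled on its include/linux/bitops.h
definition, FIELD_GET() is open-coded, and the shift/width values are
hardcoded rather than taken from the generated sysreg constants:

#include <stdint.h>
#include <stdio.h>

/* Same semantics as the kernel's sign_extend64(value, index):
 * sign-extend from the 0-based bit 'index'.
 */
static int64_t sign_extend64(uint64_t value, int index)
{
	int shift = 63 - index;

	return (int64_t)(value << shift) >> shift;
}

int main(void)
{
	const uint64_t mask  = 0xfULL << 20;	/* AdvSIMD is bits [23:20] */
	const int      width = 4;

	uint64_t idreg = 0xfULL << 20;		/* AdvSIMD == NI, rest 0 */
	uint64_t fld   = (idreg & mask) >> 20;	/* FIELD_GET() -> 0xf */

	/* get_idreg_field_signed(): 0xf sign-extended from bit 3 is -1 */
	printf("field = %lld\n", (long long)sign_extend64(fld, width - 1));

	/* __expand_field_sign_signed(ID_AA64PFR0_EL1, AdvSIMD, NI): also -1 */
	printf("limit = %lld\n", (long long)sign_extend64(0xf, width - 1));

	return 0;
}

Both sides now evaluate to -1, so the signed comparison done by
kvm_has_feat() lines up.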

Thoughts?

	M.

-- 
Without deviation from the norm, progress is not possible.


