[RFC PATCH 2/3] KVM: arm64: Make MTE_frac masking conditional on MTE capability

Marc Zyngier maz at kernel.org
Sun Apr 27 10:24:10 PDT 2025


On Mon, 14 Apr 2025 13:40:58 +0100,
Ben Horgan <ben.horgan at arm.com> wrote:
> 
> If MTE_frac is masked out unconditionally then the guest will always
> see ID_AA64PFR1_EL1_MTE_frac as 0. However, a value of 0 when
> ID_AA64PFR1_EL1_MTE is 2 indicates that MTE_ASYNC is supported. Hence, for
> a host with ID_AA64PFR1_EL1_MTE==2 and ID_AA64PFR1_EL1_MTE_frac==0xf
> (MTE_ASYNC unsupported) the guest would see MTE_ASYNC advertised as
> supported whilst the host does not support it. Therefore, expose the
> sanitised value of MTE_frac to the guest and user-space.
> 
> As MTE_frac was previously hidden (always 0), and KVM must accept values
> from user-space that it could previously have provided, allow user-space
> to set ID_AA64PFR1_EL1.MTE_frac to 0 when ID_AA64PFR1_EL1.MTE is 2.
> However, ignore the value to avoid incorrectly claiming hardware support
> for MTE_ASYNC in the guest.
> 
> Note that Linux does not check the value of ID_AA64PFR1_EL1_MTE_frac and
> wrongly assumes that MTE async faults can be generated even on hardware
> that does not support them. This issue is not addressed here.
> 
> Signed-off-by: Ben Horgan <ben.horgan at arm.com>
> ---
>  arch/arm64/kvm/sys_regs.c | 26 ++++++++++++++++++++++++--
>  1 file changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 005ad28f7306..9ae647082684 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1600,13 +1600,14 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
>  		val = sanitise_id_aa64pfr0_el1(vcpu, val);
>  		break;
>  	case SYS_ID_AA64PFR1_EL1:
> -		if (!kvm_has_mte(vcpu->kvm))
> +		if (!kvm_has_mte(vcpu->kvm)) {
>  			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
> +			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac);
> +		}
>
>  		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
>  		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap);
>  		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI);
> -		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac);
>  		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS);
>  		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_THE);
>  		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
> @@ -1953,11 +1954,32 @@ static int set_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
>  {
>  	u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
>  	u64 mpam_mask = ID_AA64PFR1_EL1_MPAM_frac_MASK;
> +	u8 mte = SYS_FIELD_GET(ID_AA64PFR1_EL1, MTE, hw_val);
> +	u8 user_mte_frac = SYS_FIELD_GET(ID_AA64PFR1_EL1, MTE_frac, user_val);
>  
>  	/* See set_id_aa64pfr0_el1 for comment about MPAM */
>  	if ((hw_val & mpam_mask) == (user_val & mpam_mask))
>  		user_val &= ~ID_AA64PFR1_EL1_MPAM_frac_MASK;
>  
> +	/*
> +	 * Previously MTE_frac was hidden from guest. However, if the
> +	 * hardware supports MTE2 but not MTE_ASYM_FAULT then a value
> +	 * of 0 for this field indicates that the hardware supports
> +	 * MTE_ASYNC. Whereas, 0xf indicates MTE_ASYNC is not supported.
> +	 *
> +	 * As KVM must accept values from user-space that it could
> +	 * previously have provided, allow user-space to set
> +	 * ID_AA64PFR1_EL1.MTE_frac to 0 when ID_AA64PFR1_EL1.MTE is 2.
> +	 * However, ignore the value to avoid incorrectly claiming
> +	 * hardware support for MTE_ASYNC in the guest.
> +	 */
> +
> +	if (mte == ID_AA64PFR1_EL1_MTE_MTE2 &&

The spec says that MTE_frac is valid if ID_AA64PFR1_EL1.MTE >= 0b0010.
Not strictly equal to 0b0010 (which represents MTE2). Crucially, MTE3
should receive the same treatment.
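
Something along these lines (untested) would match the architecture
wording and give MTE3 the same treatment:

	if (mte >= ID_AA64PFR1_EL1_MTE_MTE2 &&
	    user_mte_frac == ID_AA64PFR1_EL1_MTE_frac_ASYNC) {
		...
	}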

> +	    user_mte_frac == ID_AA64PFR1_EL1_MTE_frac_ASYNC) {
> +		user_val &= ~ID_AA64PFR1_EL1_MTE_frac_MASK;
> +		user_val |= hw_val & ID_AA64PFR1_EL1_MTE_frac_MASK;

This means you are unconditionally propagating what the HW supports,
which feels dodgy, especially considering that we don't know how
MTE_frac is going to evolve in the future.

I think you should limit the fix to the exact case we're mitigating
here, not blindly overwrite the guest's view with the HW's capability.
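
For instance (completely untested, and assuming the generated sysreg
definitions provide an NI value for MTE_frac alongside ASYNC), only
step in for the combination we know is a problem:

	u8 hw_mte_frac = SYS_FIELD_GET(ID_AA64PFR1_EL1, MTE_frac, hw_val);

	/*
	 * User-space asks for ASYNC but the HW explicitly reports NI:
	 * accept the write, but use the HW value so we never advertise
	 * ASYNC the HW doesn't have. Any other MTE_frac value is left
	 * to the normal feature checks.
	 */
	if (mte >= ID_AA64PFR1_EL1_MTE_MTE2 &&
	    hw_mte_frac == ID_AA64PFR1_EL1_MTE_frac_NI &&
	    user_mte_frac == ID_AA64PFR1_EL1_MTE_frac_ASYNC) {
		user_val &= ~ID_AA64PFR1_EL1_MTE_frac_MASK;
		user_val |= hw_val & ID_AA64PFR1_EL1_MTE_frac_MASK;
	}

That way a future MTE_frac value coming from user-space still gets
validated against what the HW actually implements instead of being
silently replaced.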

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


