[PATCH v4 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers

Gavin Shan gshan at redhat.com
Tue Oct 8 22:53:46 PDT 2024


On 10/4/24 9:07 PM, Joey Gouly wrote:
> From: James Morse <james.morse at arm.com>
> 
> commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in
> ID_AA64PFR0 register") exposed the MPAM field of ID_AA64PFR0_EL1 to guests,
> but didn't add trap handling.
> 
> If you are unlucky, this results in an MPAM-aware guest being delivered
> an undef during boot. The host prints:
> | kvm [97]: Unsupported guest sys_reg access at: ffff800080024c64 [00000005]
> | { Op0( 3), Op1( 0), CRn(10), CRm( 5), Op2( 0), func_read },
> 
> Which results in:
> | Internal error: Oops - Undefined instruction: 0000000002000000 [#1] PREEMPT SMP
> | Modules linked in:
> | CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.6.0-rc7-00559-gd89c186d50b2 #14616
> | Hardware name: linux,dummy-virt (DT)
> | pstate: 00000005 (nzcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> | pc : test_has_mpam+0x18/0x30
> | lr : test_has_mpam+0x10/0x30
> | sp : ffff80008000bd90
> ...
> | Call trace:
> |  test_has_mpam+0x18/0x30
> |  update_cpu_capabilities+0x7c/0x11c
> |  setup_cpu_features+0x14/0xd8
> |  smp_cpus_done+0x24/0xb8
> |  smp_init+0x7c/0x8c
> |  kernel_init_freeable+0xf8/0x280
> |  kernel_init+0x24/0x1e0
> |  ret_from_fork+0x10/0x20
> | Code: 910003fd 97ffffde 72001c00 54000080 (d538a500)
> | ---[ end trace 0000000000000000 ]---
> | Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> | ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---
> 
> Add support to enable the traps, and handle the three guest-accessible
> registers by injecting an UNDEF. This stops KVM from spamming the host
> log, but doesn't yet hide the feature from the ID registers.
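
Aside for readers without the rest of the series: the sys_regs.c hunk isn't
quoted in this reply, but "injecting an UNDEF" presumably means pointing the
three MPAM registers at an undef handler in the sys_reg_descs[] table,
roughly like the sketch below. The exact entries and names are my guesses,
not the actual hunk:

static bool undef_access(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			 const struct sys_reg_desc *r)
{
	/* Pretend the register doesn't exist: deliver an UNDEF to the guest */
	kvm_inject_undefined(vcpu);
	return false;
}

/* ... and in the sys_reg_descs[] table: */
{ SYS_DESC(SYS_MPAMIDR_EL1), undef_access },
{ SYS_DESC(SYS_MPAM1_EL1), undef_access },
{ SYS_DESC(SYS_MPAM0_EL1), undef_access },
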
> 
> With MPAM v1.0 we can trap the MPAMIDR_EL1 register only if
> ARM64_HAS_MPAM_HCR; with v1.1 an additional MPAM2_EL2.TIDR bit traps
> MPAMIDR_EL1 on platforms that don't have MPAMHCR_EL2. Enable whichever
> of these is supported. If neither is supported, the guest can
> discover that the CPU has MPAM support, and how many PARTIDs etc. the
> host has ... but it can't influence anything, so it's harmless.
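
The switch.h hunk isn't quoted in this reply either; from the paragraph
above, the hyp-side trap activation presumably looks something like the
sketch below. Only cpus_support_mpam() and mpam_cpus_have_mpam_hcr() appear
in the quoted hunks -- the trap constants and register names here are my
guesses:

static inline void __activate_traps_mpam(struct kvm_vcpu *vcpu)
{
	/* Trap direct guest accesses to MPAM0_EL1 and MPAM1_EL1 */
	u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;

	if (!cpus_support_mpam())
		return;

	if (mpam_cpus_have_mpam_hcr()) {
		/* MPAM v1.0: MPAMHCR_EL2 can trap guest reads of MPAMIDR_EL1 */
		write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
	} else {
		/* MPAM v1.1: the TIDR bit traps MPAMIDR_EL1 instead */
		r |= MPAM2_EL2_TIDR;
	}

	write_sysreg_s(r, SYS_MPAM2_EL2);
}
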
> 
> Fixes: 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register")
> CC: Anshuman Khandual <anshuman.khandual at arm.com>
> Link: https://lore.kernel.org/linux-arm-kernel/20200925160102.118858-1-james.morse@arm.com/
> Signed-off-by: James Morse <james.morse at arm.com>
> Signed-off-by: Joey Gouly <joey.gouly at arm.com>
> ---
>   arch/arm64/include/asm/cpufeature.h     |  2 +-
>   arch/arm64/include/asm/kvm_arm.h        |  1 +
>   arch/arm64/include/asm/mpam.h           |  2 +-
>   arch/arm64/kernel/image-vars.h          |  5 ++++
>   arch/arm64/kvm/hyp/include/hyp/switch.h | 32 +++++++++++++++++++++++++
>   arch/arm64/kvm/sys_regs.c               | 11 +++++++++
>   6 files changed, 51 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1abe55e62e63..985d966787b1 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -845,7 +845,7 @@ static inline bool system_supports_poe(void)
>   		alternative_has_cap_unlikely(ARM64_HAS_S1POE);
>   }
>   
> -static inline bool cpus_support_mpam(void)
> +static __always_inline bool cpus_support_mpam(void)
>   {
>   	return alternative_has_cap_unlikely(ARM64_MPAM);
>   }

These changes belong in PATCH[3/7], if I'm correct.

> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 109a85ee6910..16afb7a79b15 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -103,6 +103,7 @@
>   #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
>   
>   #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En | HCRX_EL2_EnFPM)
> +#define MPAMHCR_HOST_FLAGS	0
>   
>   /* TCR_EL2 Registers bits */
>   #define TCR_EL2_DS		(1UL << 32)
> diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
> index d3ef0b5cd456..3ffe0f34ffff 100644
> --- a/arch/arm64/include/asm/mpam.h
> +++ b/arch/arm64/include/asm/mpam.h
> @@ -15,7 +15,7 @@
>   DECLARE_STATIC_KEY_FALSE(arm64_mpam_has_hcr);
>   
>   /* check whether all CPUs have MPAM virtualisation support */
> -static inline bool mpam_cpus_have_mpam_hcr(void)
> +static __always_inline bool mpam_cpus_have_mpam_hcr(void)
>   {
>   	if (IS_ENABLED(CONFIG_ARM64_MPAM))
>   		return static_branch_unlikely(&arm64_mpam_has_hcr);

Same as above, these changes belong in PATCH[3/7].

Thanks,
Gavin