[PATCH v3] arm64: Add override for MPAM
Will Deacon
will at kernel.org
Thu May 29 02:45:16 PDT 2025
On Fri, May 16, 2025 at 06:25:56PM +0800, Xi Ruoyao wrote:
> As the commit message of 09e6b306f3ba ("arm64: cpufeature: discover
> CPU support for MPAM") already states, if a buggy firmware fails to
> either enable MPAM or emulate the trap as if it were disabled, the
> kernel will simply fail to boot. While upgrading the firmware would be
> the best solution, we have some hardware whose vendor has made no
> response two months after we requested a firmware update. Allow
> overriding the feature so our devices don't become e-waste.
>
> Cc: James Morse <james.morse at arm.com>
> Cc: Marc Zyngier <maz at kernel.org>
> Cc: Shameer Kolothum <shameerali.kolothum.thodi at huawei.com>
> Cc: Mingcong Bai <jeffbai at aosc.io>
> Tested-by: Shaopeng Tan <tan.shaopeng at fujitsu.com>
> Tested-by: Ben Horgan <ben.horgan at arm.com>
> Signed-off-by: Xi Ruoyao <xry111 at xry111.site>
> ---
>
> [v2]->v3:
> - Fix typos in the subject and a comment.
> - Remove a useless #include directive.
>
> [v1]->v2:
> - Handle the override and initialise EL2 MPAM in finalise_el2_state
> - Move info->mpamidr assignment to {init,update}_cpu_features
>
> [v1]: https://lore.kernel.org/linux-arm-kernel/20250401055650.22542-1-xry111@xry111.site/
>
> .../admin-guide/kernel-parameters.txt | 3 +++
> arch/arm64/include/asm/el2_setup.h | 24 ++++++++-----------
> arch/arm64/kernel/cpufeature.c | 7 ++++--
> arch/arm64/kernel/cpuinfo.c | 7 ++++--
> arch/arm64/kernel/pi/idreg-override.c | 2 ++
> 5 files changed, 25 insertions(+), 18 deletions(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 8f75ec177399..0bfcbeab7a3b 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -458,6 +458,9 @@
> arm64.nomops [ARM64] Unconditionally disable Memory Copy and Memory
> Set instructions support
>
> + arm64.nompam [ARM64] Unconditionally disable Memory Partitioning And
> + Monitoring support
> +
> arm64.nomte [ARM64] Unconditionally disable Memory Tagging Extension
> support
>
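
For anyone stuck on such firmware in the meantime: once this lands, the
workaround is just another early command-line override, i.e. booting with
something like

	arm64.nompam

appended to the kernel command line should make the MPAM field of the
ID_AA64PFR0_EL1 override read as disabled everywhere the code below looks,
so neither the EL2 init path nor cpufeature should go anywhere near the
MPAM system registers.
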
> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> index d40e427ddad9..2e6b9086efc5 100644
> --- a/arch/arm64/include/asm/el2_setup.h
> +++ b/arch/arm64/include/asm/el2_setup.h
> @@ -294,19 +294,6 @@
> .Lskip_gcs_\@:
> .endm
>
> -.macro __init_el2_mpam
> - /* Memory Partitioning And Monitoring: disable EL2 traps */
> - mrs x1, id_aa64pfr0_el1
> - ubfx x0, x1, #ID_AA64PFR0_EL1_MPAM_SHIFT, #4
> - cbz x0, .Lskip_mpam_\@ // skip if no MPAM
> - msr_s SYS_MPAM2_EL2, xzr // use the default partition
> - // and disable lower traps
> - mrs_s x0, SYS_MPAMIDR_EL1
> - tbz x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@ // skip if no MPAMHCR reg
> - msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2
> -.Lskip_mpam_\@:
> -.endm
> -
> /**
> * Initialize EL2 registers to sane values. This should be called early on all
> * cores that were booted in EL2. Note that everything gets initialised as
> @@ -324,7 +311,6 @@
> __init_el2_stage2
> __init_el2_gicv3
> __init_el2_hstr
> - __init_el2_mpam
> __init_el2_nvhe_idregs
> __init_el2_cptr
> __init_el2_fgt
> @@ -371,6 +357,16 @@
> #endif
>
> .macro finalise_el2_state
> + check_override id_aa64pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT, .Linit_mpam_\@, .Lskip_mpam_\@, x1, x2
> +
> +.Linit_mpam_\@:
> + msr_s SYS_MPAM2_EL2, xzr // use the default partition
> + // and disable lower traps
> + mrs_s x0, SYS_MPAMIDR_EL1
> + tbz x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@ // skip if no MPAMHCR reg
> + msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2
> +
> +.Lskip_mpam_\@:
> check_override id_aa64pfr0, ID_AA64PFR0_EL1_SVE_SHIFT, .Linit_sve_\@, .Lskip_sve_\@, x1, x2
>
> .Linit_sve_\@: /* SVE register access */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 4c46d80aa64b..7b8c998a0466 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1198,8 +1198,10 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
> cpacr_restore(cpacr);
> }
>
> - if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0))
> + if (id_aa64pfr0_mpam(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
> + info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
> init_cpu_ftr_reg(SYS_MPAMIDR_EL1, info->reg_mpamidr);
> + }
>
> if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
> init_cpu_ftr_reg(SYS_GMID_EL1, info->reg_gmid);
> @@ -1450,7 +1452,8 @@ void update_cpu_features(int cpu,
> cpacr_restore(cpacr);
> }
>
> - if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0)) {
> + if (id_aa64pfr0_mpam(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
> + info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
> taint |= check_update_ftr_reg(SYS_MPAMIDR_EL1, cpu,
> info->reg_mpamidr, boot->reg_mpamidr);
> }
> diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
> index 285d7d538342..15d39fbc6085 100644
> --- a/arch/arm64/kernel/cpuinfo.c
> +++ b/arch/arm64/kernel/cpuinfo.c
> @@ -494,8 +494,11 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
> if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
> __cpuinfo_store_cpu_32bit(&info->aarch32);
>
> - if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0))
> - info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
> + /*
> + * info->reg_mpamidr deferred to {init,update}_cpu_features because we
> + * don't want to read it (and trigger a trap on buggy firmware) if
> + * using an aa64pfr0_el1 override to unconditionally disable MPAM.
> + */
Bah, and you have to do that because MPAMIDR_EL1 accesses can trap to
EL3, which is weirdly different to e.g. the SMIDR_EL1 accesses made by
the SME code immediately below. Great stuff.
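
To spell the ordering out for future readers: the MPAMIDR_EL1 access has to
sit behind the *sanitised* (override-applied) view of ID_AA64PFR0_EL1 rather
than the raw per-CPU copy, otherwise arm64.nompam wouldn't save us from the
trap. A condensed sketch of the pattern used in the cpufeature.c hunk above
(not a literal copy, and the function name is made up):

#include <asm/cpu.h>		/* struct cpuinfo_arm64 */
#include <asm/cpufeature.h>	/* read_sanitised_ftr_reg(), id_aa64pfr0_mpam() */
#include <asm/cputype.h>	/* read_cpuid() */

static void sketch_read_mpamidr(struct cpuinfo_arm64 *info)
{
	/*
	 * The sanitised view already has any arm64.nompam override folded
	 * in, so bail out before touching anything the firmware might trap.
	 */
	if (!id_aa64pfr0_mpam(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1)))
		return;

	/*
	 * Only now is MPAMIDR_EL1 expected to be safe to read: firmware has
	 * either enabled MPAM or claims to emulate the access as if MPAM
	 * were disabled.
	 */
	info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
}
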
Anyway, looks ok to me. I can take it as a fix if Marc is happy with it.
Will