[PATCH v3 10/47] arm64: mpam: Initialise and context switch the MPAMSM_EL1 register

Gavin Shan gshan at redhat.com
Sun Jan 18 22:51:40 PST 2026


Hi Ben,

On 1/13/26 12:58 AM, Ben Horgan wrote:
> The MPAMSM_EL1 sets the MPAM labels, PMG and PARTID, for loads and stores
> generated by a shared SMCU. Disable the traps so the kernel can use it and
> set it to the same configuration as the per-EL cpu MPAM configuration.
> 
> If an SMCU is not shared with other cpus then it is implementation
> defined whether the configuration from MPAMSM_EL1 is used or that from
> the appropriate MPAMy_ELx. As we set the same PMG_D and PARTID_D
> configuration for MPAM0_EL1, MPAM1_EL1 and MPAMSM_EL1, the resulting
> configuration is the same regardless.
> 
> The range of valid configurations for the PARTID and PMG in MPAMSM_EL1 is
> not currently specified in the Arm Architecture Reference Manual, but the
> architect has confirmed that it is intended to be the same as that for the
> cpu configuration in the MPAMy_ELx registers.
> 
> Reviewed-by: Jonathan Cameron <jonathan.cameron at huawei.com>
> Signed-off-by: Ben Horgan <ben.horgan at arm.com>
> ---
> Changes since v2:
> Mention PMG_D and PARTID_D specifically in the commit message
> Add paragraph in commit message on range of MPAMSM_EL1 fields
> ---
>   arch/arm64/include/asm/el2_setup.h | 3 ++-
>   arch/arm64/include/asm/mpam.h      | 2 ++
>   arch/arm64/kernel/cpufeature.c     | 2 ++
>   arch/arm64/kernel/mpam.c           | 3 +++
>   4 files changed, 9 insertions(+), 1 deletion(-)
> 

One nitpick below...

Reviewed-by: Gavin Shan <gshan at redhat.com>

> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> index cacd20df1786..d37984c09799 100644
> --- a/arch/arm64/include/asm/el2_setup.h
> +++ b/arch/arm64/include/asm/el2_setup.h
> @@ -504,7 +504,8 @@
>   	check_override id_aa64pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT, .Linit_mpam_\@, .Lskip_mpam_\@, x1, x2
>   
>   .Linit_mpam_\@:
> -	msr_s	SYS_MPAM2_EL2, xzr		// use the default partition
> +	mov	x0, #MPAM2_EL2_EnMPAMSM_MASK
> +	msr_s	SYS_MPAM2_EL2, x0		// use the default partition,
>   						// and disable lower traps
>   	mrs_s	x0, SYS_MPAMIDR_EL1
>   	tbz	x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@  // skip if no MPAMHCR reg
> diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
> index 14011e5970ce..7b3d3abad162 100644
> --- a/arch/arm64/include/asm/mpam.h
> +++ b/arch/arm64/include/asm/mpam.h
> @@ -53,6 +53,8 @@ static inline void mpam_thread_switch(struct task_struct *tsk)
>   		return;
>   
>   	write_sysreg_s(regval, SYS_MPAM1_EL1);
> +	if (system_supports_sme())
> +		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
>   	isb();
>   
>   	/* Synchronising the EL0 write is left until the ERET to EL0 */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 0cdfb3728f43..2ede543b3eeb 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2491,6 +2491,8 @@ cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
>   		regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
>   
>   	write_sysreg_s(regval, SYS_MPAM1_EL1);
> +	if (system_supports_sme())
> +		write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
>   	isb();
>   
>   	/* Synchronising the EL0 write is left until the ERET to EL0 */
> diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
> index dbe0a2d05abb..6ce4a36469ce 100644
> --- a/arch/arm64/kernel/mpam.c
> +++ b/arch/arm64/kernel/mpam.c
> @@ -28,6 +28,9 @@ static int mpam_pm_notifier(struct notifier_block *self,
>   		 */
>   		regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
>   		write_sysreg_s(regval, SYS_MPAM1_EL1);
> +		if (system_supports_sme())
> +			write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D),
> +				       SYS_MPAMSM_EL1);

Braces { } are missing here: the statement spans two lines, so the kernel coding style asks for braces around the if body.
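Untested, but with braces added (same logic as the patch) the hunk would read something like:

		if (system_supports_sme()) {
			write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D),
				       SYS_MPAMSM_EL1);
		}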

>   		isb();
>   
>   		write_sysreg_s(regval, SYS_MPAM0_EL1);

Thanks,
Gavin
