[PATCH v3 07/47] arm64: mpam: Re-initialise MPAM regs when CPU comes online
Catalin Marinas
catalin.marinas at arm.com
Thu Jan 15 10:14:09 PST 2026
On Mon, Jan 12, 2026 at 04:58:34PM +0000, Ben Horgan wrote:
> From: James Morse <james.morse at arm.com>
>
> Now that the MPAM system registers are expected to have values that change,
> reprogram them based on the previous value when a CPU is brought online.
>
> Previously, MPAM's 'default PARTID' of 0 was always used in
> kernel-space, as this is the PARTID value the hardware guarantees at
> reset. Because there is a limited number of PARTIDs, this value is
> also exposed to user-space, meaning changes to the resctrl default
> group would also affect kernel threads. Instead, use the task's
> PARTID value for kernel work done on behalf of user-space too. The
> default of 0 is kept for both user-space and kernel-space when MPAM
> is not enabled.
>
> Reviewed-by: Jonathan Cameron <jonathan.cameron at huawei.com>
> Signed-off-by: James Morse <james.morse at arm.com>
> Signed-off-by: Ben Horgan <ben.horgan at arm.com>
> ---
> Changes since rfc:
> CONFIG_MPAM -> CONFIG_ARM64_MPAM
> Check mpam_enabled
> Comment about relying on ERET for synchronisation
> Update commit message
> ---
> arch/arm64/kernel/cpufeature.c | 19 ++++++++++++-------
> 1 file changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index c840a93b9ef9..0cdfb3728f43 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -86,6 +86,7 @@
> #include <asm/kvm_host.h>
> #include <asm/mmu.h>
> #include <asm/mmu_context.h>
> +#include <asm/mpam.h>
> #include <asm/mte.h>
> #include <asm/hypervisor.h>
> #include <asm/processor.h>
> @@ -2483,13 +2484,17 @@ test_has_mpam(const struct arm64_cpu_capabilities *entry, int scope)
> static void
> cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
> {
> - /*
> - * Access by the kernel (at EL1) should use the reserved PARTID
> - * which is configured unrestricted. This avoids priority-inversion
> - * where latency sensitive tasks have to wait for a task that has
> - * been throttled to release the lock.
> - */
> - write_sysreg_s(0, SYS_MPAM1_EL1);
Is this comment about priority inversion no longer valid? I see that
thread switching sets the same value for both the MPAM0 and MPAM1
registers, but I couldn't find an explanation of why this is now
better when it wasn't before.
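
For reference, my reading of the switch path in the earlier patch is
roughly the sketch below (the helper and thread_info field names are
my guesses, so they may not match the series exactly):

	static inline void mpam_thread_switch(struct task_struct *tsk)
	{
		int cpu = smp_processor_id();
		u64 regval;

		if (!IS_ENABLED(CONFIG_ARM64_MPAM) ||
		    !static_branch_likely(&mpam_enabled))
			return;

		/* hypothetical field name for the task's PARTID/PMG */
		regval = READ_ONCE(task_thread_info(tsk)->mpam_partid_pmg);
		if (regval == READ_ONCE(per_cpu(arm64_mpam_current, cpu)))
			return;

		/* Same value for EL0 and EL1; synchronised by the ERET */
		write_sysreg_s(regval, SYS_MPAM0_EL1);
		write_sysreg_s(regval, SYS_MPAM1_EL1);
		WRITE_ONCE(per_cpu(arm64_mpam_current, cpu), regval);
	}
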
MPAM1 will also be inherited by IRQ handlers AFAICT.
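
To make that concrete: an interrupt taken while a throttled task is
running will now execute with that task's PARTID in MPAM1_EL1, whereas
the hunk deleted above pinned EL1 to the unrestricted reserved PARTID:

	/*
	 * Old behaviour (the hunk removed above): EL1, and therefore
	 * any IRQ handler, always ran with the unrestricted reserved
	 * PARTID, regardless of which task was interrupted.
	 */
	write_sysreg_s(0, SYS_MPAM1_EL1);
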
> + int cpu = smp_processor_id();
> + u64 regval = 0;
> +
> + if (IS_ENABLED(CONFIG_ARM64_MPAM) && static_branch_likely(&mpam_enabled))
> + regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
> +
> + write_sysreg_s(regval, SYS_MPAM1_EL1);
> + isb();
> +
> + /* Synchronising the EL0 write is left until the ERET to EL0 */
> + write_sysreg_s(regval, SYS_MPAM0_EL1);
As I mentioned before, is it worth waiting until the ERET?

Related to this, do LDTR/STTR use MPAM0 or MPAM1? I couldn't figure
this out from the Arm ARM. If they use MPAM0, then we need the ISB
early for the uaccess routines, at least in the thread-switching path
(an earlier patch).
--
Catalin