[PATCH v3 26/29] arm_mpam: Use long MBWU counters if supported
Jonathan Cameron
jonathan.cameron at huawei.com
Fri Oct 24 11:29:57 PDT 2025
On Fri, 17 Oct 2025 18:56:42 +0000
James Morse <james.morse at arm.com> wrote:
> From: Rohit Mathew <rohit.mathew at arm.com>
>
> Now that the larger counter sizes are probed, make use of them.
>
> Callers of mpam_msmon_read() may not know (or care!) about the different
> counter sizes. Allow them to specify mpam_feat_msmon_mbwu and have the
> driver pick the counter to use.
>
> Only 32bit accesses to the MSC are required to be supported by the
> spec, but these registers are 64bits. The lower half may overflow
> into the higher half between two 32bit reads. To avoid this, use
> a helper that reads the top half multiple times to check for overflow.
>
> Signed-off-by: Rohit Mathew <rohit.mathew at arm.com>
> [morse: merged multiple patches from Rohit, added explicit counter selection ]
> Signed-off-by: James Morse <james.morse at arm.com>
> Reviewed-by: Ben Horgan <ben.horgan at arm.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron at huawei.com>
> Reviewed-by: Fenghua Yu <fenghuay at nvidia.com>
> Tested-by: Fenghua Yu <fenghuay at nvidia.com>
A few tiny things on a fresh look.
> +static u64 mpam_msc_read_mbwu_l(struct mpam_msc *msc)
> +{
> + int retry = 3;
> + u32 mbwu_l_low;
> + u64 mbwu_l_high1, mbwu_l_high2;
> +
> + mpam_mon_sel_lock_held(msc);
> +
> + WARN_ON_ONCE((MSMON_MBWU_L + sizeof(u64)) > msc->mapped_hwpage_sz);
> + WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &msc->accessibility));
> +
> + mbwu_l_high2 = __mpam_read_reg(msc, MSMON_MBWU_L + 4);
> + do {
> + mbwu_l_high1 = mbwu_l_high2;
> + mbwu_l_low = __mpam_read_reg(msc, MSMON_MBWU_L);
> + mbwu_l_high2 = __mpam_read_reg(msc, MSMON_MBWU_L + 4);
> +
> + retry--;
> + } while (mbwu_l_high1 != mbwu_l_high2 && retry > 0);
Just carrying on without screaming if the read tore repeatedly seems unwise...
I can't see it actually happening more than once, but it still seems like
we'd want to know if it did. Maybe something like the sketch below the
function.
> +
> + if (mbwu_l_high1 == mbwu_l_high2)
> + return (mbwu_l_high1 << 32) | mbwu_l_low;
> + return MSMON___NRDY_L;
> +}
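Untested, and only sketching the idea on top of this patch, but something
along these lines would at least leave a trace if the retries are exhausted:

	} while (mbwu_l_high1 != mbwu_l_high2 && retry > 0);

	/* Still torn after three retries - make some noise so we hear about it. */
	WARN_ON_ONCE(mbwu_l_high1 != mbwu_l_high2);

	if (mbwu_l_high1 == mbwu_l_high2)
		return (mbwu_l_high1 << 32) | mbwu_l_low;
	return MSMON___NRDY_L;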
> static void write_msmon_ctl_flt_vals(struct mon_read *m, u32 ctl_val,
> @@ -978,10 +1027,15 @@ static void write_msmon_ctl_flt_vals(struct mon_read *m, u32 ctl_val,
> mpam_write_monsel_reg(msc, CSU, 0);
> mpam_write_monsel_reg(msc, CFG_CSU_CTL, ctl_val | MSMON_CFG_x_CTL_EN);
> break;
> - case mpam_feat_msmon_mbwu:
> + case mpam_feat_msmon_mbwu_44counter:
> + case mpam_feat_msmon_mbwu_63counter:
> + mpam_msc_zero_mbwu_l(m->ris->vmsc->msc);
> + fallthrough;
> + case mpam_feat_msmon_mbwu_31counter:
> mpam_write_monsel_reg(msc, CFG_MBWU_FLT, flt_val);
> mpam_write_monsel_reg(msc, CFG_MBWU_CTL, ctl_val);
> mpam_write_monsel_reg(msc, MBWU, 0);
> +
Stray blank line to clean up (push it back into the original patch).
> mpam_write_monsel_reg(msc, CFG_MBWU_CTL, ctl_val | MSMON_CFG_x_CTL_EN);
>
> mbwu_state = &m->ris->mbwu_state[m->ctx->mon];
> @@ -993,10 +1047,19 @@ static void write_msmon_ctl_flt_vals(struct mon_read *m, u32 ctl_val,
> }
> }