[PATCH 3/4] arm64: errata: Work around early CME DVMSync acknowledgement
Catalin Marinas
catalin.marinas at arm.com
Fri Mar 6 04:19:12 PST 2026
On Fri, Mar 06, 2026 at 12:00:30PM +0000, Catalin Marinas wrote:
> On Thu, Mar 05, 2026 at 02:32:11PM +0000, Will Deacon wrote:
> > On Mon, Mar 02, 2026 at 04:57:56PM +0000, Catalin Marinas wrote:
> > > +void sme_do_dvmsync(void)
> > > +{
> > > +	/*
> > > +	 * This is called from the TLB maintenance functions after the DSB ISH
> > > +	 * that sends the hardware DVMSync message. If this CPU sees the mask
> > > +	 * as empty, the remote CPU executing sme_set_active() would have seen
> > > +	 * the DVMSync and no IPI is required.
> > > +	 */
> > > +	if (cpumask_empty(sme_active_cpus))
> > > +		return;
> > > +
> > > +	preempt_disable();
> > > +	smp_call_function_many(sme_active_cpus, sme_dvmsync_ipi, NULL, true);
> > > +	preempt_enable();
> > > +}
> >
> > Why do we care about all CPUs using SME, rather than limiting it to the
> > set of CPUs using SME with the mm we've invalidated? This looks like it
> > will result in unnecessary cross-calls when multiple tasks are using SME
> > (especially as the mm flag is only cleared on fork).
>
> Yes, it's a possibility but I traded it for simplicity. We also have the
> TTU case where we don't have an mm and we don't want to broadcast to all
> CPUs either, hence an sme_active_cpus mask. As I just replied on patch
> 2, for the TLB batching we wouldn't be able to use a cpumask in the
> batching structure since, per the ordering above, we need the DVMSync
> before checking if/where to send the IPI.
>
> For the typical TLBI (not TTU), we can track a per-mm mask passed down
> to this function (I have patches doing this but it didn't make a
> significant difference in benchmarks).
Reusing the current mm_cpumask(), something like below. We could also
scrap the MMCF_SME_DVMSYNC flag, though we'd then end up always calling
sme_do_dvmsync() and checking the mask, which is probably more expensive
than a flag check.
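
For comparison, dropping the flag would reduce the tlbflush.h wrapper
to something like this (untested sketch):

static inline void sme_dvmsync(struct mm_struct *mm)
{
	/*
	 * No MMCF_SME_DVMSYNC fast path: always take the call and rely
	 * on sme_do_dvmsync() bailing out on an empty cpumask. Reading
	 * a shared cacheline there is likely more expensive than
	 * testing a local mm flag, hence keeping the flag.
	 */
	sme_do_dvmsync(mm);
}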
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index e3ea0246a4f4..2c77ca41cb14 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -81,7 +81,7 @@ static inline unsigned long get_trans_granule(void)
}
#ifdef CONFIG_ARM64_ERRATUM_SME_DVMSYNC
-void sme_do_dvmsync(void);
+void sme_do_dvmsync(struct mm_struct *mm);
static inline void sme_dvmsync(struct mm_struct *mm)
{
@@ -90,7 +90,7 @@ static inline void sme_dvmsync(struct mm_struct *mm)
	if (mm && !test_bit(ilog2(MMCF_SME_DVMSYNC), &mm->context.flags))
		return;

-	sme_do_dvmsync();
+	sme_do_dvmsync(mm);
}
#else
static inline void sme_dvmsync(struct mm_struct *mm) { }
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 90015fc29722..37e215cd0f39 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1378,6 +1378,7 @@ void sme_set_active(unsigned int cpu)
	if (!test_bit(ilog2(MMCF_SME_DVMSYNC), &current->mm->context.flags))
		set_bit(ilog2(MMCF_SME_DVMSYNC), &current->mm->context.flags);

+	cpumask_set_cpu(cpu, mm_cpumask(current->mm));
	cpumask_set_cpu(cpu, sme_active_cpus);

	/*
@@ -1398,6 +1399,7 @@ void sme_clear_active(unsigned int cpu)
	 * With SCTLR_EL1.IESB enabled, the SME memory transactions are
	 * completed on entering EL1.
	 */
+	cpumask_clear_cpu(cpu, mm_cpumask(current->mm));
	cpumask_clear_cpu(cpu, sme_active_cpus);
}
@@ -1410,19 +1412,25 @@ static void sme_dvmsync_ipi(void *unused)
	 */
}
-void sme_do_dvmsync(void)
+void sme_do_dvmsync(struct mm_struct *mm)
{
	/*
	 * This is called from the TLB maintenance functions after the DSB ISH
	 * that sends the hardware DVMSync message. If this CPU sees the mask
	 * as empty, the remote CPU executing sme_set_active() would have seen
	 * the DVMSync and no IPI is required.
+	 *
+	 * When an mm is provided, limit the IPI to CPUs that are actively
+	 * running SME code for that mm (recorded in mm_cpumask()), otherwise
+	 * fall back to the global sme_active_cpus mask.
	 */
-	if (cpumask_empty(sme_active_cpus))
+	const struct cpumask *mask = mm ? mm_cpumask(mm) : sme_active_cpus;
+
+	if (cpumask_empty(mask))
		return;

	preempt_disable();
-	smp_call_function_many(sme_active_cpus, sme_dvmsync_ipi, NULL, true);
+	smp_call_function_many(mask, sme_dvmsync_ipi, NULL, true);
	preempt_enable();
}
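
For context, the ordering this relies on at the call site looks roughly
like the sketch below, based on the mainline flush_tlb_mm() with the
sme_dvmsync() hook appended (the actual call sites are in the original
patch, not quoted here):

static inline void flush_tlb_mm(struct mm_struct *mm)
{
	unsigned long asid;

	dsb(ishst);
	asid = __TLBI_VADDR(0, ASID(mm));
	__tlbi(aside1is, asid);
	__tlbi_user(aside1is, asid);
	dsb(ish);		/* completes the broadcast TLBI and DVMSync */
	sme_dvmsync(mm);	/* may IPI CPUs that acknowledged DVMSync early */
}

The point is that sme_dvmsync() runs only after the DSB ISH, so a CPU
that cleared itself from the mask before we read it must already have
observed the DVMSync.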