[PATCH 3/4] arm64: errata: Work around early CME DVMSync acknowledgement

Will Deacon <will at kernel.org>
Thu Mar 12 07:55:15 PDT 2026


On Tue, Mar 10, 2026 at 03:35:19PM +0000, Catalin Marinas wrote:
> Thanks Vladimir,
> 
> On Mon, Mar 09, 2026 at 10:13:20AM +0000, Vladimir Murzin wrote:
> > On 3/6/26 12:00, Catalin Marinas wrote:
> > >>> @@ -1358,6 +1360,85 @@ void do_sve_acc(unsigned long esr, struct pt_regs *regs)
> > >>>  	put_cpu_fpsimd_context();
> > >>>  }
> > >>>  
> > >>> +#ifdef CONFIG_ARM64_ERRATUM_SME_DVMSYNC
> > >>> +
> > >>> +/*
> > >>> + * SME/CME erratum handling
> > >>> + */
> > >>> +static cpumask_var_t sme_dvmsync_cpus;
> > >>> +static cpumask_var_t sme_active_cpus;
> > >>> +
> > >>> +void sme_set_active(unsigned int cpu)
> > >>> +{
> > >>> +	if (!cpus_have_final_cap(ARM64_WORKAROUND_SME_DVMSYNC))
> > >>> +		return;
> > >>> +	if (!cpumask_test_cpu(cpu, sme_dvmsync_cpus))
> > >>> +		return;
> > >>> +
> > >>> +	if (!test_bit(ilog2(MMCF_SME_DVMSYNC), &current->mm->context.flags))
> > >>> +		set_bit(ilog2(MMCF_SME_DVMSYNC), &current->mm->context.flags);
> > >>> +
> > >>> +	cpumask_set_cpu(cpu, sme_active_cpus);
> > >>> +
> > >>> +	/*
> > >>> +	 * Ensure subsequent (SME) memory accesses are observed after the
> > >>> +	 * cpumask and the MMCF_SME_DVMSYNC flag setting.
> > >>> +	 */
> > >>> +	smp_mb();
> > >>
> > >> I can't convince myself that a DMB is enough here, as the whole issue
> > >> is that the SME memory accesses can be observed _after_ the TLB
> > >> invalidation. I'd have thought we'd need a DSB to ensure that the flag
> > >> updates are visible before the exception return.
> > > 
> > > This is only to ensure that the sme_active_cpus mask is observed before
> > > any SME accesses. The mask is later used to decide whether to send the
> > > IPI. We have something like this:
> > > 
> > > P0
> > > 	STSET	[sme_active_cpus]
> > > 	DMB
> > > 	SME access to [addr]
> > > 
> > > P1
> > > 	TLBI	[addr]
> > > 	DSB
> > > 	LDR	[sme_active_cpus]
> > > 	CBZ	out
> > > 	Do IPI
> > > out:
> > > 
> > > If P1 did not observe the STSET to [sme_active_cpus], P0 should have
> > > received and acknowledged the DVMSync before the STSET. Is your concern
> > > that P1 can observe the subsequent SME access but not the STSET?
> > > 
> > > No idea whether herd can model this (I only put this into TLA+ for the
> > > main logic check, but that doesn't capture the subtle memory ordering).
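> > > 
> > > In C, the P1 (flush) side of that sketch would look roughly like the
> > > below. This is purely an illustrative sketch: the function names are
> > > made up, and flush_tlb_mm() stands in for whatever TLBI sequence is
> > > actually used; it is not what the patch does:
> > > 
> > > /* IPI handler: empty on purpose, taking the IPI synchronises the CPU. */
> > > static void sme_dvmsync_ipi(void *unused)
> > > {
> > > }
> > > 
> > > static void sme_dvmsync_flush_sketch(struct mm_struct *mm)
> > > {
> > > 	flush_tlb_mm(mm);		/* TLBI ...IS; DSB ISH */
> > > 
> > > 	/*
> > > 	 * The DSB at the end of flush_tlb_mm() orders the TLBI
> > > 	 * completion before the sme_active_cpus load below. A CPU whose
> > > 	 * bit is clear here either already acknowledged the DVMSync or
> > > 	 * will set its bit (followed by its DMB) afterwards, i.e. its
> > > 	 * SME accesses use the new translation.
> > > 	 */
> > > 	if (test_bit(ilog2(MMCF_SME_DVMSYNC), &mm->context.flags) &&
> > > 	    !cpumask_empty(sme_active_cpus))
> > > 		smp_call_function_many(sme_active_cpus, sme_dvmsync_ipi,
> > > 				       NULL, 1);
> > > }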
> > 
> > JFYI, herd support for SME is still a work in progress (specifically, it
> > is missing the corresponding cat file updates), yet it can already model
> > VMSA.
> > 
> > IIUC, the expectation here is that either:
> > - P1 observes sme_active_cpus, so we have to do the IPI, or
> > - P0 observes the TLBI (say the mapping is torn down, so it must fault).
> > 
> > Anything else is unexpected/forbidden.
> > 
> > AArch64 A
> > variant=vmsa
> > {
> >  int x=0;
> >  int active=0;
> > 
> >  0:X1=active;
> >  0:X3=x;
> > 
> >  1:X0=(valid:0);
> >  1:X1=PTE(x);
> >  1:X2=x;
> >  1:X3=active;
> >  
> > }
> >  P0                                 | P1                                            ;
> >  MOV W0,#1                          | STR X0,[X1]                                   ;
> >  STR W0,[X1] (* sme_active_cpus  *) | DSB ISH                                       ;
> >  DMB SY                             | LSR X9,X2,#12                                 ;
> >  LDR W2,[X3] (* access to [addr] *) | TLBI VAAE1IS,X9 (* [addr] *)                  ;
> >                                     | DSB ISH                                       ;
> >                                     | LDR W4,[X3]     (* sme_active_cpus *)         ;
> > 
> > exists ~(1:X4=1 \/ fault(P0,x))
> > 
> > Is that a correct understanding? Have I missed anything?
> 
> Yes, I think that's correct. Another tweak specific to this erratum
> would be for P1 to store to x via another mapping after the TLBI+DSB;
> the P0 load should not see that store.
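> 
> Schematically, in the notation from earlier (addr' being a second,
> hypothetical mapping of the same page as addr):
> 
> P1
> 	TLBI	[addr]
> 	DSB
> 	STR	[addr']	; store via the other mapping
> 	LDR	[sme_active_cpus]
> 
> and the P0 SME load of [addr] must not return the value of that STR.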
> 
> Even with the CPU erratum, if the P1 DVMSync is received/acknowledged by
> P0 before its STR to sme_active_cpus, I don't see how the subsequent SME
> load would overtake the STR given the DMB. The erratum messed up the
> DVMSync acknowledgement, not the barriers.

I'm still finding this hard to reason about.

Why can't:

1. P0 translates its SME load and puts the valid translation into its TLB
2. P1 runs to completion, sees sme_active_cpus as 0 and so doesn't IPI
3. P0 writes to sme_active_cpus and then does the SME load using the
   translation from (1)
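
Or, in the notation of your earlier sketch:

P0: translate [addr] and fill the TLB                       (step 1)
P1: TLBI [addr]; DSB; LDR [sme_active_cpus] == 0; no IPI    (step 2)
P0: STSET [sme_active_cpus]; DMB; SME access to [addr]
    via the stale TLB entry from step 1                     (step 3)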

I guess this is diving into the ugly corners of what the erratum actually is...

Will


