[PATCH] arch/cacheflush: Introduce flush_all_caches()

Dan Williams dan.j.williams at intel.com
Tue Aug 16 09:50:45 PDT 2022


Peter Zijlstra wrote:
> On Mon, Aug 15, 2022 at 09:07:06AM -0700, Davidlohr Bueso wrote:
> > diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
> > index b192d917a6d0..ce2ec9556093 100644
> > --- a/arch/x86/include/asm/cacheflush.h
> > +++ b/arch/x86/include/asm/cacheflush.h
> > @@ -10,4 +10,7 @@
> > 
> >  void clflush_cache_range(void *addr, unsigned int size);
> > 
> > +#define flush_all_caches() \
> > +	do { wbinvd_on_all_cpus(); } while(0)
> > +
> 
> This is horrific... we've done our utmost best to remove all WBINVD
> usage and here you're adding it back in the most horrible form possible
> ?!?
> 
> Please don't do this, do *NOT* use WBINVD.

Unfortunately there are few good options here, and the changelog did
not make clear that this is continuing a legacy use [1], not adding new
wbinvd usage.

The functionality this is enabling is the ability to instantaneously
secure-erase potentially terabytes of memory at once, where the kernel
needs to be sure that none of the data from before the erase is still
present in the cache. It is also used when unlocking a memory device,
where speculative reads and firmware accesses could have cached poison
from before the device was unlocked.
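
For reference, a call site in a memory-device driver could look roughly
like the sketch below. Only flush_all_caches() comes from the patch
above; the example_* names are hypothetical stand-ins for the firmware
call that performs the erase:

    #include <linux/device.h>
    #include <asm/cacheflush.h>

    /* Hypothetical firmware call that performs the secure erase. */
    int example_fw_secure_erase(struct device *dev);

    static int example_secure_erase(struct device *dev)
    {
            int rc = example_fw_secure_erase(dev);

            if (rc)
                    return rc;

            /*
             * Ensure no pre-erase (or pre-unlock) data is still sitting
             * in any CPU cache before the memory is handed back out.
             */
            flush_all_caches();
            return 0;
    }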

This capability is typically only used once per boot (for unlock), or
once per bare-metal provisioning event (secure erase), such as when
handing the system off to another tenant. That small scope, plus the
fact that none of this is available to a VM, limits the potential damage. So,
similar to the mitigation we did in [2] that did not kill off wbinvd
completely, this is limited to specific scenarios and should be disabled
in any scenario where wbinvd is painful / forbidden.
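
To be concrete about the "disabled where wbinvd is forbidden" point,
one possible shape for the wrapper, in the spirit of the hypervisor
check from [2], might be the following; this is only a sketch and the
exact guard is up for debate:

    /* Sketch: skip the global flush when running as a guest, as in [2]. */
    #define flush_all_caches()                                          \
            do {                                                        \
                    if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR))   \
                            wbinvd_on_all_cpus();                       \
            } while (0)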

[1]: 4c6926a23b76 ("acpi/nfit, libnvdimm: Add unlock of nvdimm support for Intel DIMMs")
[2]: e2efb6359e62 ("ACPICA: Avoid cache flush inside virtual machines")


