[PATCH 4.14 87/89] x86/mm: Rework wbinvd, hlt operation in stop_this_cpu()

Greg Kroah-Hartman gregkh at linuxfoundation.org
Mon Jan 22 00:46:07 PST 2018


4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Tom Lendacky <thomas.lendacky at amd.com>

commit f23d74f6c66c3697e032550eeef3f640391a3a7d upstream.

Issues have been reported with the for loop in stop_this_cpu() that
issues the 'wbinvd; hlt' sequence.  Reverting this sequence to halt()
has been shown to resolve them.

However, the wbinvd is needed when running with SME.  The reason for the
wbinvd is to prevent cache flush races between encrypted and non-encrypted
entries that have the same physical address.  This can occur when
kexec'ing from memory encryption active to inactive or vice-versa.  The
important thing is to avoid memory references outside of kernel text
(such as stack usage), so the native_*() functions are needed since they
expand to inline asm sequences.  So instead of reverting the change,
rework the sequence.

Move the wbinvd instruction outside of the for loop as native_wbinvd()
and make its execution conditional on X86_FEATURE_SME.  In the for loop,
change the asm 'wbinvd; hlt' sequence back to a halt sequence but use
the native_halt() call.

Fixes: bba4ed011a52 ("x86/mm, kexec: Allow kexec to be used with SME")
Reported-by: Dave Young <dyoung at redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky at amd.com>
Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
Tested-by: Dave Young <dyoung at redhat.com>
Cc: Juergen Gross <jgross at suse.com>
Cc: Tony Luck <tony.luck at intel.com>
Cc: Yu Chen <yu.c.chen at intel.com>
Cc: Baoquan He <bhe at redhat.com>
Cc: Linus Torvalds <torvalds at linux-foundation.org>
Cc: kexec at lists.infradead.org
Cc: ebiederm at redhat.com
Cc: Borislav Petkov <bp at alien8.de>
Cc: Rui Zhang <rui.zhang at intel.com>
Cc: Arjan van de Ven <arjan at linux.intel.com>
Cc: Boris Ostrovsky <boris.ostrovsky at oracle.com>
Cc: Dan Williams <dan.j.williams at intel.com>
Link: https://lkml.kernel.org/r/20180117234141.21184.44067.stgit@tlendack-t1.amdoffice.net
Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>

---
 arch/x86/kernel/process.c |   25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -380,19 +380,24 @@ void stop_this_cpu(void *dummy)
 	disable_local_APIC();
 	mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
 
+	/*
+	 * Use wbinvd on processors that support SME. This provides support
+	 * for performing a successful kexec when going from SME inactive
+	 * to SME active (or vice-versa). The cache must be cleared so that
+	 * if there are entries with the same physical address, both with and
+	 * without the encryption bit, they don't race each other when flushed
+	 * and potentially end up with the wrong entry being committed to
+	 * memory.
+	 */
+	if (boot_cpu_has(X86_FEATURE_SME))
+		native_wbinvd();
 	for (;;) {
 		/*
-		 * Use wbinvd followed by hlt to stop the processor. This
-		 * provides support for kexec on a processor that supports
-		 * SME. With kexec, going from SME inactive to SME active
-		 * requires clearing cache entries so that addresses without
-		 * the encryption bit set don't corrupt the same physical
-		 * address that has the encryption bit set when caches are
-		 * flushed. To achieve this a wbinvd is performed followed by
-		 * a hlt. Even if the processor is not in the kexec/SME
-		 * scenario this only adds a wbinvd to a halting processor.
+		 * Use native_halt() so that memory contents don't change
+		 * (stack usage and variables) after possibly issuing the
+		 * native_wbinvd() above.
 		 */
-		asm volatile("wbinvd; hlt" : : : "memory");
+		native_halt();
 	}
 }
 
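For reference, a minimal sketch of how the reworked stop_this_cpu() reads
once this patch is applied.  It only restates what the hunk above shows;
the setup lines before disable_local_APIC() are elided rather than
reproduced here:

void stop_this_cpu(void *dummy)
{
	/* ... unchanged setup above this hunk elided ... */

	disable_local_APIC();
	mcheck_cpu_clear(this_cpu_ptr(&cpu_info));

	/* Flush caches once, before halting, and only when SME is present. */
	if (boot_cpu_has(X86_FEATURE_SME))
		native_wbinvd();

	for (;;) {
		/* No further memory writes (stack, locals) after the flush. */
		native_halt();
	}
}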
