[PATCH v3 6/8] arm64/efi: Use a mutex to protect the EFI stack and FP/SIMD state

Ard Biesheuvel ardb+git at google.com
Thu Sep 18 03:30:17 PDT 2025


From: Ard Biesheuvel <ardb at kernel.org>

Replace the spinlock in the arm64 glue code with a mutex, so that
the CPU can be preempted while running the EFI runtime service.
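
For context, the setup/teardown pair is expected to bracket each EFI
runtime service invocation roughly as in the sketch below. This is an
illustrative example only, not the actual runtime-wrappers code:
efi_rts_call_example() is a hypothetical name and the runtime service
invocation itself is elided.

#include <linux/efi.h>

#include <asm/efi.h>

static efi_status_t efi_rts_call_example(void)
{
	efi_status_t status;

	/*
	 * Take efi_rt_lock, map the EFI page tables and enable FP/SIMD.
	 * Returns false if FP/SIMD may not be used in this context or if
	 * the lock could not be acquired.
	 */
	if (!arch_efi_call_virt_setup())
		return EFI_ABORTED;

	/* ... invoke the runtime service, e.g. via arch_efi_call_virt() ... */
	status = EFI_SUCCESS;

	/* Disable FP/SIMD, unmap the EFI page tables and drop efi_rt_lock. */
	arch_efi_call_virt_teardown();

	return status;
}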

Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
---
 arch/arm64/kernel/efi.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 0d52414415f3..4372fafde8e9 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -166,15 +166,22 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
 	return s;
 }
 
-static DEFINE_RAW_SPINLOCK(efi_rt_lock);
+static DEFINE_MUTEX(efi_rt_lock);
 
 bool arch_efi_call_virt_setup(void)
 {
 	if (!may_use_simd())
 		return false;
 
+	/*
+	 * This might be called from a non-sleepable context so try to take the
+	 * lock but don't block on it. This should never fail in practice, as
+	 * all EFI runtime calls are serialized under the efi_runtime_lock.
+	 */
+	if (WARN_ON(!mutex_trylock(&efi_rt_lock)))
+		return false;
+
 	efi_virtmap_load();
-	raw_spin_lock(&efi_rt_lock);
 	kernel_neon_begin();
 	return true;
 }
@@ -182,8 +189,8 @@ bool arch_efi_call_virt_setup(void)
 void arch_efi_call_virt_teardown(void)
 {
 	kernel_neon_end();
-	raw_spin_unlock(&efi_rt_lock);
 	efi_virtmap_unload();
+	mutex_unlock(&efi_rt_lock);
 }
 
 asmlinkage u64 *efi_rt_stack_top __ro_after_init;
-- 
2.51.0.384.g4c02a37b29-goog