[PATCH v1] arm64/mm: Ensure lazy_mmu_mode never nests

Ryan Roberts ryan.roberts at arm.com
Fri Jun 6 06:56:52 PDT 2025


Commit 1ef3095b1405 ("arm64/mm: Permit lazy_mmu_mode to be nested")
provided a quick fix to ensure that lazy_mmu_mode continues to work when
CONFIG_DEBUG_PAGEALLOC is enabled, which can cause lazy_mmu_mode to
nest.

The solution in that patch was to make the implementation tolerant of
nesting: when the inner nest exits lazy_mmu_mode, we exit entirely and
the outer exit becomes a nop. But this sacrifices the optimization
opportunity for the remainder of the outer user.
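
To illustrate, here's a minimal userspace sketch (not kernel code; the
bool stands in for TIF_LAZY_MMU, and enter_lazy()/leave_lazy() are
illustrative names for the arch enter/leave hooks):

	static bool lazy_mmu_flag;	/* stands in for TIF_LAZY_MMU */

	static void enter_lazy(void) { lazy_mmu_flag = true; }
	static void leave_lazy(void) { lazy_mmu_flag = false; /* flush */ }

	enter_lazy();	/* outer section, e.g. zap_pte_range() */
	enter_lazy();	/* nested, e.g. via apply_to_page_range() */
	leave_lazy();	/* inner exit flushes and clears the flag... */
	leave_lazy();	/* ...so the outer exit is now a nop */

With a single flag, everything in the outer section after the inner
leave_lazy() runs with batching disabled.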

So let's take a different approach and simply ensure the nesting never
happens in the first place. The nesting occurs when the page allocator
calls out to __kernel_map_pages(), which eventually calls
apply_to_page_range(), which in turn calls arch_enter_lazy_mmu_mode().
So simply notice in __kernel_map_pages() if we are already in
lazy_mmu_mode and temporarily exit it.
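
For reference, the problematic call chain under CONFIG_DEBUG_PAGEALLOC
looks roughly like this (zap_pte_range() is just one example of an
outer lazy_mmu_mode user):

	zap_pte_range()			/* enters lazy_mmu_mode */
	  -> page allocator (alloc/free)
	    -> __kernel_map_pages()
	      -> set_memory_valid()
	        -> apply_to_page_range()	/* enters lazy_mmu_mode again */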

With that approach, we can effectively revert commit 1ef3095b1405
("arm64/mm: Permit lazy_mmu_mode to be nested"), re-enabling the
VM_WARN_ON that will fire if we ever detect nesting in future.

Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
---

I wonder if you might be willing to take this for v6.16? I think it's a
neater solution than my first attempt - commit 1ef3095b1405 ("arm64/mm:
Permit lazy_mmu_mode to be nested") - which is already in Linus's
master.

To be clear, the current solution is safe; I just think this is much neater.

Applies on today's master branch (e271ed52b344).

Thanks,
Ryan

 arch/arm64/include/asm/pgtable.h | 22 ++++++++++------------
 arch/arm64/mm/pageattr.c         | 23 +++++++++++++++++------
 2 files changed, 27 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 88db8a0c0b37..9f387337ccc3 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -83,21 +83,11 @@ static inline void queue_pte_barriers(void)
 #define  __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 static inline void arch_enter_lazy_mmu_mode(void)
 {
-	/*
-	 * lazy_mmu_mode is not supposed to permit nesting. But in practice this
-	 * does happen with CONFIG_DEBUG_PAGEALLOC, where a page allocation
-	 * inside a lazy_mmu_mode section (such as zap_pte_range()) will change
-	 * permissions on the linear map with apply_to_page_range(), which
-	 * re-enters lazy_mmu_mode. So we tolerate nesting in our
-	 * implementation. The first call to arch_leave_lazy_mmu_mode() will
-	 * flush and clear the flag such that the remainder of the work in the
-	 * outer nest behaves as if outside of lazy mmu mode. This is safe and
-	 * keeps tracking simple.
-	 */
-
 	if (in_interrupt())
 		return;

+	VM_WARN_ON(test_thread_flag(TIF_LAZY_MMU));
+
 	set_thread_flag(TIF_LAZY_MMU);
 }

@@ -119,6 +109,14 @@ static inline void arch_leave_lazy_mmu_mode(void)
 	clear_thread_flag(TIF_LAZY_MMU);
 }

+static inline bool arch_in_lazy_mmu_mode(void)
+{
+	if (in_interrupt())
+		return false;
+
+	return test_thread_flag(TIF_LAZY_MMU);
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 04d4a8f676db..4da7a847d5f3 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -293,18 +293,29 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
 }

 #ifdef CONFIG_DEBUG_PAGEALLOC
-/*
- * This is - apart from the return value - doing the same
- * thing as the new set_direct_map_valid_noflush() function.
- *
- * Unify? Explain the conceptual differences?
- */
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
+	bool lazy_mmu;
+
 	if (!can_set_direct_map())
 		return;

+	/*
+	 * This is called during page alloc or free, and may be called while
+	 * in lazy mmu mode. Since set_memory_valid() may also enter lazy mmu
+	 * mode, this would cause unsupported nesting; the inner exit would
+	 * leave the mode entirely, so the remainder of the outer lazy mmu
+	 * section would no longer benefit from the optimization. So
+	 * temporarily leave lazy mmu mode for the duration of the call.
+	 */
+	lazy_mmu = arch_in_lazy_mmu_mode();
+	if (lazy_mmu)
+		arch_leave_lazy_mmu_mode();
+
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
+
+	if (lazy_mmu)
+		arch_enter_lazy_mmu_mode();
 }
 #endif /* CONFIG_DEBUG_PAGEALLOC */

--
2.43.0



