[PATCH v2 2/3] arm64: mm: Optimize range_split_to_ptes()

Ryan Roberts ryan.roberts at arm.com
Thu Nov 6 08:09:42 PST 2025


Enter lazy_mmu mode while splitting a range of memory to pte mappings.
This defers the barriers that would otherwise be emitted after every pte
(and pmd/pud) write until lazy_mmu mode is exited.
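
For reference, the deferral mechanism looks roughly like this (a
simplified sketch of the arm64 implementation in
arch/arm64/include/asm/pgtable.h; exact helper and flag names may
differ between kernel versions):

	/* The barriers owed after a kernel pgtable write. */
	static inline void emit_pte_barriers(void)
	{
		dsb(ishst);	/* make the write visible to the table walker */
		isb();		/* resync the local instruction stream */
	}

	/* Called after each pte/pmd/pud write. */
	static inline void queue_pte_barriers(void)
	{
		if (test_thread_flag(TIF_LAZY_MMU))
			/* In lazy_mmu mode: just note that barriers are owed. */
			set_thread_flag(TIF_LAZY_MMU_PENDING);
		else
			emit_pte_barriers();
	}

	static inline void arch_leave_lazy_mmu_mode(void)
	{
		/* One barrier pair for the whole batch, if any writes occurred. */
		if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
			emit_pte_barriers();
		clear_thread_flag(TIF_LAZY_MMU);
	}

So a split that writes many thousands of ptes pays for a single dsb/isb
pair instead of one pair per write.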

On large systems, this is expected to significantly speed up the
fallback to pte-mapping the linear map when the boot CPU supports
BBML2_NOABORT but the secondary CPUs do not. I haven't measured it
directly, but it is analogous to commit 1fcb7cea8a5f ("arm64: mm: Batch dsb
and isb when populating pgtables").

Note that on the path from arch_kfence_init_pool() we may sleep while
allocating memory inside lazy_mmu mode. Generic code forbids sleeping
inside lazy_mmu, but we know the arm64 implementation is sleep-safe, so
this is OK and follows the same pattern already used by
split_kernel_leaf_mapping().
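
The reason sleeping is safe here is that arm64 keeps the lazy_mmu state
in per-task thread flags rather than in per-CPU data, so it survives the
task sleeping (and being migrated) mid-batch. A rough illustration of
the pattern (the flag names follow the arm64 implementation; the
allocation stands in for the pgtable allocations done during the walk,
and ptep/pte/gfp are placeholders):

	arch_enter_lazy_mmu_mode();	/* set_thread_flag(TIF_LAZY_MMU) */

	/*
	 * Pgtable allocation may sleep when gfp allows it (GFP_KERNEL on
	 * the arch_kfence_init_pool() path). The lazy_mmu thread flags
	 * travel with the task, so the pending-barrier state is not lost.
	 */
	table = (pte_t *)get_zeroed_page(gfp);

	set_pte(ptep, pte);		/* barriers deferred, not emitted... */

	arch_leave_lazy_mmu_mode();	/* ...one dsb/isb pair emitted here */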

Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
---
 arch/arm64/mm/mmu.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a364ac2c9c61..652bb8c14035 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -832,8 +832,14 @@ static const struct mm_walk_ops split_to_ptes_ops = {
 
 static int range_split_to_ptes(unsigned long start, unsigned long end, gfp_t gfp)
 {
-	return walk_kernel_page_table_range_lockless(start, end,
+	int ret;
+
+	arch_enter_lazy_mmu_mode();
+	ret = walk_kernel_page_table_range_lockless(start, end,
 					&split_to_ptes_ops, NULL, &gfp);
+	arch_leave_lazy_mmu_mode();
+
+	return ret;
 }
 
 static bool linear_map_requires_bbml2 __initdata;
-- 
2.43.0