[PATCH v4 05/13] ARM: mmu: skip TLB invalidation if remapping zero bytes

Ahmad Fatoum a.fatoum at barebox.org
Mon Aug 4 10:22:25 PDT 2025


From: Ahmad Fatoum <a.fatoum at pengutronix.de>

The loop that remaps memory banks can end up calling remap_range()
with a size of zero when a reserved region sits at the very start of
a memory bank.
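For illustration, the remap loop has roughly the following shape
(schematic sketch only, with simplified names; not the exact barebox
code):

	for_each_memory_bank(bank) {
		resource_size_t pos = bank->start;
		struct resource *rsv;

		for_each_reserved_region(bank, rsv) {
			/*
			 * If the reserved region starts right at
			 * bank->start, rsv->start - pos is zero and
			 * this remaps zero bytes.
			 */
			remap_range((void *)pos, rsv->start - pos,
				    MAP_CACHED);
			pos = rsv->end + 1;
		}
	}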

This is handled correctly by the code, but still performs an
unnecessary invalidation of the whole TLB. Return early instead to
skip it.
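Abridged, both functions share the structure below, so the flush at
the end runs even when the loop body never executes:

	size = PAGE_ALIGN(size);

	while (size) {
		/* ... map one chunk, advance addresses, shrink size ... */
	}

	tlb_invalidate();	/* reached even when size == 0 */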

Signed-off-by: Ahmad Fatoum <a.fatoum at pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum at barebox.org>
---
 arch/arm/cpu/mmu_32.c | 2 ++
 arch/arm/cpu/mmu_64.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index b21fc75f0ceb..80e302596890 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -261,6 +261,8 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
 	pmd_flags = pte_flags_to_pmd(pte_flags);
 
 	size = PAGE_ALIGN(size);
+	if (!size)
+		return;
 
 	while (size) {
 		const bool pgdir_size_aligned = IS_ALIGNED(virt_addr, PGDIR_SIZE);
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 121dd136af33..db312daafdd2 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -145,6 +145,8 @@ static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
 	attr &= ~PTE_TYPE_MASK;
 
 	size = PAGE_ALIGN(size);
+	if (!size)
+		return;
 
 	while (size) {
 		table = ttb;
-- 
2.39.5