[PATCH] arm64: head: avoid over-mapping in map_memory

Mark Rutland <mark.rutland@arm.com>
Tue Aug 10 08:27:56 PDT 2021


The `compute_indices` and `populate_entries` macros operate on inclusive
bounds, and thus the `map_memory` macro which uses them also operates
on inclusive bounds.

We pass `_end` and `__idmap_text_end` to `map_memory`, but these are
exclusive bounds, and if one of these is sufficiently aligned (as a
result of kernel configuration, physical placement, and KASLR), then:

* In `compute_indices`, the computed `iend` will index the page/block *after*
  the final byte of the intended mapping.

* In `populate_entries`, an unnecessary entry will be created at the end
  of each level of table. At the leaf level, this entry will map up to
  SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend
  to map.

As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may
violate the boot protocol and map physical addresses past the 2MiB-aligned
end address we are permitted to map. Since we map these with Normal memory
attributes, this may result in further problems depending on what those
physical addresses correspond to.
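
To make the off-by-one concrete, here is a rough user-space sketch (not
part of the patch) of the leaf-level index arithmetic, using hypothetical
addresses and assuming 4K pages (SWAPPER_BLOCK_SIZE == 2MiB, shift 21);
the real `compute_indices` also masks by the number of table entries and
walks multiple levels:

#include <stdio.h>

#define SWAPPER_BLOCK_SHIFT	21

int main(void)
{
	unsigned long vstart = 0x40200000UL;	/* hypothetical start of range */
	unsigned long vend   = 0x40400000UL;	/* exclusive end, block-aligned */

	unsigned long istart = vstart >> SWAPPER_BLOCK_SHIFT;
	unsigned long iend   = vend >> SWAPPER_BLOCK_SHIFT;	/* one block too far */

	/* One 2MiB block is intended, but two leaf entries get created. */
	printf("leaf entries created: %lu\n", iend - istart + 1);	/* prints 2 */
	return 0;
}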

Fix this by subtracting one from the end address in both cases, such
that we always pass inclusive bounds to the macros. The comments are
also updated to document that the macros expect inclusive bounds.
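
The same toy arithmetic with the end address decremented first shows the
intended single entry (again illustrative only, with the same hypothetical
addresses and 4K-page assumption):

#include <stdio.h>

#define SWAPPER_BLOCK_SHIFT	21

int main(void)
{
	unsigned long vstart = 0x40200000UL;	/* hypothetical start of range */
	unsigned long vend   = 0x40400000UL;	/* exclusive end, block-aligned */

	/* Subtract one so iend indexes the block of the final intended byte. */
	unsigned long istart = vstart >> SWAPPER_BLOCK_SHIFT;
	unsigned long iend   = (vend - 1) >> SWAPPER_BLOCK_SHIFT;

	printf("leaf entries created: %lu\n", iend - istart + 1);	/* prints 1 */
	return 0;
}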

Fixes: 0370b31e48454d8c ("arm64: Extend early page table code to allow for larger kernels")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/head.S | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

I spotted this while working on some rework of the early page table code.
While the rest isn't ready yet, I thought I'd send this out on its own as it's
a fix.

Mark.

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c5c994a73a64..f0826be4c104 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -176,8 +176,8 @@ SYM_CODE_END(preserve_boot_args)
  * were needed in the previous page table level then the next page table level is assumed
  * to be composed of multiple pages. (This effectively scales the end index).
  *
- *	vstart:	virtual address of start of range
- *	vend:	virtual address of end of range
+ *	vstart:	virtual address of start of range (inclusive)
+ *	vend:	virtual address of end of range (inclusive)
  *	shift:	shift used to transform virtual address into index
  *	ptrs:	number of entries in page table
  *	istart:	index in table corresponding to vstart
@@ -214,8 +214,8 @@ SYM_CODE_END(preserve_boot_args)
  *
  *	tbl:	location of page table
  *	rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
- *	vstart:	start address to map
- *	vend:	end address to map - we map [vstart, vend]
+ *	vstart:	virtual address of start of mapping (inclusive)
+ *	vend:	virtual address of end of mapping (inclusive)
  *	flags:	flags to use to map last level entries
  *	phys:	physical address corresponding to vstart - physical memory is contiguous
  *	pgds:	the number of pgd entries
@@ -355,6 +355,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 1:
 	ldr_l	x4, idmap_ptrs_per_pgd
 	adr_l	x6, __idmap_text_end		// __pa(__idmap_text_end)
+	sub	x6, x6, #1
 
 	map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14
 
@@ -366,6 +367,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	add	x5, x5, x23			// add KASLR displacement
 	mov	x4, PTRS_PER_PGD
 	adrp	x6, _end			// runtime __pa(_end)
+	sub	x6, x6, #1
 	adrp	x3, _text			// runtime __pa(_text)
 	sub	x6, x6, x3			// _end - _text
 	add	x6, x6, x5			// runtime __va(_end)
-- 
2.11.0



