[PATCH 1/2] arm64: simplify rules for defining ARM64_MEMSTART_ALIGN
Ard Biesheuvel
ardb at kernel.org
Wed Dec 15 06:52:27 PST 2021
ARM64_MEMSTART_ALIGN defines the minimum alignment of the translation
between virtual and physical addresses, so that data structures dealing
with physical addresses (such as the vmemmap struct page array) appear
sufficiently aligned in memory.
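To make the requirement concrete, here is a minimal sketch (not part of
the patch) of the arithmetic involved, assuming 4K pages, the
SECTION_SIZE_BITS value of 27 currently defined in
arch/arm64/include/asm/sparsemem.h, and a 64-byte struct page:

/*
 * Hedged sketch, not kernel code: shows the size of one sparsemem
 * section's slice of the vmemmap struct page array.
 *
 * Assumptions: 4K pages (PAGE_SHIFT 12), SECTION_SIZE_BITS 27 and a
 * 64-byte struct page.
 */
#include <stdio.h>

#define PAGE_SHIFT		12
#define SECTION_SIZE_BITS	27
#define STRUCT_PAGE_SIZE	64UL

int main(void)
{
	unsigned long pages_per_section = 1UL << (SECTION_SIZE_BITS - PAGE_SHIFT);
	unsigned long vmemmap_per_section = pages_per_section * STRUCT_PAGE_SIZE;

	/* vmemmap offset of a pfn is linear: pfn * sizeof(struct page) */
	printf("pages per section:         %lu\n", pages_per_section);
	printf("vmemmap bytes per section: %lu KiB\n", vmemmap_per_section >> 10);
	return 0;
}

Under those assumptions, if memstart_addr is a multiple of the 128 MiB
section size, the first pfn of every section is a multiple of
pages_per_section, so each section's 2 MiB slice of the vmemmap starts
at a multiple of its own size, i.e., it is naturally aligned.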
We currently increase this value artificially beyond that minimum,
based on the assumption that being able to use larger block mappings
in the linear region is preferable, even though we rarely create any
now that rodata=full is the default.
So let's simplify this, and always define ARM64_MEMSTART_ALIGN in terms
of the vmemmap section size.
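For concreteness, a hedged sketch of the before/after values follows
(the old shifts come from the #ifdef ladder removed below; the
SECTION_SIZE_BITS values of 27 for 4K/16K pages and 29 for 64K pages
are assumed from arch/arm64/include/asm/sparsemem.h):

/*
 * Hedged before/after comparison, not kernel code. Old shifts:
 * PUD_SHIFT (30) for 4K, CONT_PMD_SHIFT (30) for 16K, PMD_SHIFT (29)
 * for 64K; SECTION_SIZE_BITS assumed as 27 (4K/16K) or 29 (64K).
 */
#include <stdio.h>

struct cfg {
	const char *granule;
	unsigned int memstart_shift;	/* old granule-based shift */
	unsigned int section_size_bits;
};

int main(void)
{
	struct cfg cfgs[] = {
		{ "4K",  30, 27 },	/* PUD_SHIFT */
		{ "16K", 30, 27 },	/* CONT_PMD_SHIFT */
		{ "64K", 29, 29 },	/* PMD_SHIFT */
	};

	for (int i = 0; i < 3; i++) {
		/* old: max(shift, SECTION_SIZE_BITS); new: SECTION_SIZE_BITS */
		unsigned int old = cfgs[i].memstart_shift > cfgs[i].section_size_bits ?
				   cfgs[i].memstart_shift : cfgs[i].section_size_bits;

		printf("%4s pages: old %4lu MiB -> new %4lu MiB\n",
		       cfgs[i].granule,
		       (1UL << old) >> 20,
		       (1UL << cfgs[i].section_size_bits) >> 20);
	}
	return 0;
}

With those assumptions, the minimum alignment drops from 1 GiB to
128 MiB on 4K and 16K pages, and is unchanged at 512 MiB on 64K pages.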
Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
---
arch/arm64/include/asm/kernel-pgtable.h | 27 +++-----------------
1 file changed, 4 insertions(+), 23 deletions(-)
diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 96dc0f7da258..505ae0d560e6 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -113,30 +113,11 @@
#endif
/*
- * To make optimal use of block mappings when laying out the linear
- * mapping, round down the base of physical memory to a size that can
- * be mapped efficiently, i.e., either PUD_SIZE (4k granule) or PMD_SIZE
- * (64k granule), or a multiple that can be mapped using contiguous bits
- * in the page tables: 32 * PMD_SIZE (16k granule)
+ * The MM code assumes that struct page arrays belonging to a vmemmap section
+ * appear naturally aligned in memory. This implies that the minimum relative
+ * alignment between virtual and physical addresses in the linear region must
+ * equal the section size.
*/
-#if defined(CONFIG_ARM64_4K_PAGES)
-#define ARM64_MEMSTART_SHIFT PUD_SHIFT
-#elif defined(CONFIG_ARM64_16K_PAGES)
-#define ARM64_MEMSTART_SHIFT CONT_PMD_SHIFT
-#else
-#define ARM64_MEMSTART_SHIFT PMD_SHIFT
-#endif
-
-/*
- * sparsemem vmemmap imposes an additional requirement on the alignment of
- * memstart_addr, due to the fact that the base of the vmemmap region
- * has a direct correspondence, and needs to appear sufficiently aligned
- * in the virtual address space.
- */
-#if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
#define ARM64_MEMSTART_ALIGN (1UL << SECTION_SIZE_BITS)
-#else
-#define ARM64_MEMSTART_ALIGN (1UL << ARM64_MEMSTART_SHIFT)
-#endif
#endif /* __ASM_KERNEL_PGTABLE_H */
--
2.30.2