[PATCH v7 02/33] arm64: mm: Avoid swapper block size when choosing vmemmap granularity

Ard Biesheuvel ardb at kernel.org
Fri Nov 11 09:11:30 PST 2022


The logic that decides between PTE and PMD mappings in the vmemmap region
is currently based on the granularity of the initial ID map, but those two
things have little to do with each other.

The reason we use PMDs here on 4k pagesize kernels is that a struct
page array describing a single section of memory takes up at least the
area covered by a PMD, so mapping down to base pages is pointless.
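
For illustration, a worked example using values assumed from a typical
arm64 configuration (64-byte struct page, i.e. STRUCT_PAGE_MAX_SHIFT == 6):
with 4k pages, VMEMMAP_SHIFT == 12 - 6 == 6, and with SECTION_SIZE_BITS ==
27 the struct page array for one 128 MiB section takes up

    1 << (SECTION_SIZE_BITS - VMEMMAP_SHIFT) == 1 << 21 == 2 MiB

which is exactly PMD_SIZE, so PMD mappings remain appropriate there. With
16k pages (VMEMMAP_SHIFT == 8, PMD_SHIFT == 25), the same array covers only
512 KiB, far below the 32 MiB PMD size, so base pages are the right choice.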

So use the correct conditional, and add a comment to clarify it.

This allows the constants related to the swapper block size to be
removed or renamed in the future.

Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
---
 arch/arm64/mm/mmu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 757c2fe54d2e99f0..0c35e1f195678695 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1196,7 +1196,12 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!ARM64_KERNEL_USES_PMD_MAPS)
+	/*
+	 * Use page mappings for the vmemmap region if the area taken up by a
+	 * struct page array covering a single section is smaller than the area
+	 * covered by a PMD.
+	 */
+	if (SECTION_SIZE_BITS - VMEMMAP_SHIFT < PMD_SHIFT)
 		return vmemmap_populate_basepages(start, end, node, altmap);
 
 	do {
-- 
2.35.1
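
As a quick sanity check of the new conditional outside the kernel, the
minimal userspace sketch below evaluates it for the three arm64 page sizes;
the PAGE_SHIFT, PMD_SHIFT and SECTION_SIZE_BITS values are assumptions
based on common arm64 configurations, not taken from kernel headers:

    /*
     * Standalone illustration (not part of the patch): evaluates the
     * vmemmap granularity check used above for the three arm64 page
     * sizes. All constants are assumptions for common configs with a
     * 64-byte struct page (STRUCT_PAGE_MAX_SHIFT == 6).
     */
    #include <stdio.h>

    int main(void)
    {
        const struct {
            const char *name;
            int page_shift, pmd_shift, section_size_bits;
        } cfg[] = {
            { "4k",  12, 21, 27 },
            { "16k", 14, 25, 27 },
            { "64k", 16, 29, 29 },
        };

        for (unsigned int i = 0; i < sizeof(cfg) / sizeof(cfg[0]); i++) {
            int vmemmap_shift = cfg[i].page_shift - 6;
            int vmemmap_bits = cfg[i].section_size_bits - vmemmap_shift;

            /* mirrors: SECTION_SIZE_BITS - VMEMMAP_SHIFT < PMD_SHIFT */
            printf("%3s pages: per-section vmemmap = 2^%d bytes -> %s\n",
                   cfg[i].name, vmemmap_bits,
                   vmemmap_bits < cfg[i].pmd_shift ? "PTE mappings"
                                                   : "PMD mappings");
        }

        return 0;
    }

Under these assumptions it prints PMD mappings for 4k pages and PTE
mappings for 16k and 64k pages, matching the behaviour described above.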