[PATCH v3 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()
David Woodhouse
dwmw2 at infradead.org
Wed Apr 23 00:52:49 PDT 2025
From: David Woodhouse <dwmw at amazon.co.uk>
Currently, memmap_init() initializes hole_pfn with 0 instead of
ARCH_PFN_OFFSET. Then init_unavailable_range() starts iterating from
the page at address zero up to the first available page, but it does
nothing for the pages below ARCH_PFN_OFFSET because pfn_valid() fails
for them.
If ARCH_PFN_OFFSET is very large (e.g. something like 2^64 - 2GiB when
the kernel is used as a library and loaded at a very high address), the
pointless iteration over the pages below ARCH_PFN_OFFSET takes a very
long time, and the kernel appears to hang at boot.
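A back-of-the-envelope estimate shows the scale of the problem. Assuming
4KiB pages (PAGE_SHIFT = 12) and the typical pageblock order of 9:

	ARCH_PFN_OFFSET     ~= (2^64 - 2^31) >> 12  ~= 2^52 invalid PFNs
	old-loop iterations ~= 2^52 / 2^9 (one per pageblock) ~= 2^43

Even at one iteration per nanosecond, 2^43 iterations is roughly
8.8 * 10^12 ns, i.e. well over two hours spent doing nothing but
stepping across the hole.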
Use for_each_valid_pfn() to skip the pointless iterations.
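For illustration, here is a minimal userspace sketch of the idea behind
for_each_valid_pfn(). It is a toy, not the implementation added earlier
in this series (the real macro consults the memory model, e.g. sparsemem
section data, to find the next valid PFN); the range table and the
next_valid_pfn() helper here are invented for the example:

#include <stdio.h>

/* Toy model: a sorted table of valid [start, end) PFN ranges. */
struct pfn_range {
	unsigned long start, end;
};

static const struct pfn_range valid_ranges[] = {
	{ 0x100,  0x104 },
	{ 0x1000, 0x1003 },
};
#define NR_RANGES (sizeof(valid_ranges) / sizeof(valid_ranges[0]))

/* Return the first valid PFN >= pfn, or epfn if there is none below it. */
static unsigned long next_valid_pfn(unsigned long pfn, unsigned long epfn)
{
	unsigned int i;

	for (i = 0; i < NR_RANGES; i++) {
		if (pfn < valid_ranges[i].start)
			pfn = valid_ranges[i].start;	/* jump the hole */
		if (pfn < valid_ranges[i].end)
			return pfn < epfn ? pfn : epfn;
	}
	return epfn;
}

/* Visit only the valid PFNs in [spfn, epfn). */
#define for_each_valid_pfn(pfn, spfn, epfn)			\
	for ((pfn) = next_valid_pfn((spfn), (epfn));		\
	     (pfn) < (epfn);					\
	     (pfn) = next_valid_pfn((pfn) + 1, (epfn)))

int main(void)
{
	unsigned long pfn;

	/* Prints 0x100..0x103 and 0x1000..0x1002; the holes in between
	 * and after are skipped without being walked. */
	for_each_valid_pfn(pfn, 0, 0x2000)
		printf("pfn %#lx\n", pfn);

	return 0;
}

The point is that a hole of any size is skipped without touching each
pageblock in it, so the cost of init_unavailable_range() scales with the
valid memory rather than with the span between PFN 0 and
ARCH_PFN_OFFSET.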
Reported-by: Ruihan Li <lrh2000 at pku.edu.cn>
Suggested-by: Mike Rapoport <rppt at kernel.org>
Signed-off-by: David Woodhouse <dwmw at amazon.co.uk>
---
mm/mm_init.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 41884f2155c4..0d1a4546825c 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -845,11 +845,7 @@ static void __init init_unavailable_range(unsigned long spfn,
 	unsigned long pfn;
 	u64 pgcnt = 0;
 
-	for (pfn = spfn; pfn < epfn; pfn++) {
-		if (!pfn_valid(pageblock_start_pfn(pfn))) {
-			pfn = pageblock_end_pfn(pfn) - 1;
-			continue;
-		}
+	for_each_valid_pfn(pfn, spfn, epfn) {
 		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
 		__SetPageReserved(pfn_to_page(pfn));
 		pgcnt++;
--
2.49.0