[PATCH 2/3] mm: hugetlb: introduce arch_free_vmemmap_page
Muchun Song
songmuchun at bytedance.com
Wed Mar 10 07:15:34 GMT 2021
We register bootmem info for vmemmap pages at boot time on x86-64, so
those vmemmap pages must be freed with free_bootmem_page(). Some other
architectures do not need bootmem info at all; in that case
free_reserved_page() is enough to free vmemmap pages.

Currently only x86-64 needs free_bootmem_page(), so introduce a default
arch_free_vmemmap_page() that uses free_reserved_page() to free vmemmap
pages directly, and override it on x86-64 with an implementation that
calls free_bootmem_page().
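For reviewers not familiar with the mechanism: __weak expands to
__attribute__((weak)), so the generic definition in mm/sparse-vmemmap.c
only takes effect when no architecture supplies a strong definition of
the same symbol. A minimal user-space sketch of that link-time override
(illustrative file and function names only, not part of this patch):

/* weak_default.c - generic (weak) default, like mm/sparse-vmemmap.c */
#include <stdio.h>

__attribute__((weak)) void arch_free_vmemmap_page_demo(void)
{
	printf("default: free_reserved_page()\n");
}

int main(void)
{
	/* Resolves to the strong definition if one is linked in. */
	arch_free_vmemmap_page_demo();
	return 0;
}

/* arch_override.c - optional strong override, like arch/x86/mm/init_64.c
 *
 *	void arch_free_vmemmap_page_demo(void)
 *	{
 *		printf("x86-64: free_bootmem_page()\n");
 *	}
 *
 * Linking only weak_default.c runs the weak default; adding
 * arch_override.c to the link makes the strong definition win.
 */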
Signed-off-by: Muchun Song <songmuchun at bytedance.com>
---
arch/x86/mm/init_64.c | 5 +++++
mm/sparse-vmemmap.c | 9 +++++++--
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 39f88c5faadc..732609dad0ec 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1575,6 +1575,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
}
#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void arch_free_vmemmap_page(struct page *page)
+{
+ free_bootmem_page(page);
+}
+
void register_page_bootmem_memmap(unsigned long section_nr,
struct page *start_page, unsigned long nr_pages)
{
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 60fc6cd6cd23..76f7b52820e3 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -156,16 +156,21 @@ static void vmemmap_remap_range(unsigned long start, unsigned long end,
flush_tlb_kernel_range(start + PAGE_SIZE, end);
}
+void __weak arch_free_vmemmap_page(struct page *page)
+{
+ free_reserved_page(page);
+}
+
/*
* Free a vmemmap page. A vmemmap page can be allocated from the memblock
* allocator or buddy allocator. If the PG_reserved flag is set, it means
* that it allocated from the memblock allocator, just free it via the
- * free_bootmem_page(). Otherwise, use __free_page().
+ * arch_free_vmemmap_page(). Otherwise, use __free_page().
*/
static inline void free_vmemmap_page(struct page *page)
{
if (PageReserved(page))
- free_bootmem_page(page);
+ arch_free_vmemmap_page(page);
else
__free_page(page);
}
--
2.11.0