[PATCH v1 0/4] arm64: drop pfn_valid_within() and simplify pfn_valid()
Mike Rapoport
rppt@kernel.org
Tue Apr 20 10:09:21 BST 2021
From: Mike Rapoport <rppt@linux.ibm.com>
Hi,
These patches aim to remove CONFIG_HOLES_IN_ZONE and essentially hardwire
pfn_valid_within() to 1.
The idea is to mark NOMAP pages as reserved in the memory map and restore
the intended semantics of pfn_valid() to designate availability of struct
page for a pfn.
With this, the core mm will be able to cope with the fact that it cannot use
NOMAP pages, and the holes created by NOMAP ranges within MAX_ORDER blocks
will be treated correctly even without pfn_valid_within().
The patches have only been boot-tested on qemu-system-aarch64, so I'd really
appreciate memory stress tests on real hardware.
If this works out, we'll be one step closer to dropping the custom
pfn_valid() on arm64 altogether.
Changes since RFC:
Link: https://lore.kernel.org/lkml/20210407172607.8812-1-rppt@kernel.org
* Add comment about the semantics of pfn_valid() as Anshuman suggested
* Extend comments about MEMBLOCK_NOMAP, per Anshuman
* Use pfn_is_map_memory() name for the exported wrapper for
memblock_is_map_memory(). It is still local to arch/arm64 in the end
because of header dependency issues.
Mike Rapoport (4):
include/linux/mmzone.h: add documentation for pfn_valid()
memblock: update initialization of reserved pages
arm64: decouple check whether pfn is in linear map from pfn_valid()
arm64: drop pfn_valid_within() and simplify pfn_valid()
arch/arm64/Kconfig | 3 ---
arch/arm64/include/asm/memory.h | 2 +-
arch/arm64/include/asm/page.h | 1 +
arch/arm64/kvm/mmu.c | 2 +-
arch/arm64/mm/init.c | 10 ++++++++--
arch/arm64/mm/ioremap.c | 4 ++--
arch/arm64/mm/mmu.c | 2 +-
include/linux/memblock.h | 4 +++-
include/linux/mmzone.h | 11 +++++++++++
mm/memblock.c | 28 ++++++++++++++++++++++++++--
10 files changed, 54 insertions(+), 13 deletions(-)
base-commit: e49d033bddf5b565044e2abe4241353959bc9120
--
2.28.0