[PATCH 2/2] arm64: mm: make pfn always valid with flat memory
Xishi Qiu
qiuxishi at huawei.com
Mon Apr 11 04:08:38 PDT 2016
On 2016/4/5 16:22, Chen Feng wrote:
> Make the pfn always valid when using flat memory.
> If the reserved memory is not aligned to the
> memblock size, there will be holes in the zone.
>
> This patch ensures that the memory in the buddy
> allocator is always covered by the mem_map array.
>
> Signed-off-by: Chen Feng <puck.chen at hisilicon.com>
> Signed-off-by: Fu Jun <oliver.fu at hisilicon.com>
> ---
> arch/arm64/mm/init.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index ea989d8..0e1d5b7 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -306,7 +306,8 @@ static void __init free_unused_memmap(void)
How about making free_unused_memmap() also support CONFIG_SPARSEMEM_VMEMMAP?

Thanks,
Xishi Qiu
> struct memblock_region *reg;
>
> for_each_memblock(memory, reg) {
> - start = __phys_to_pfn(reg->base);
> + start = round_down(__phys_to_pfn(reg->base),
> + MAX_ORDER_NR_PAGES);
>
> #ifdef CONFIG_SPARSEMEM
> /*
> @@ -327,8 +328,8 @@ static void __init free_unused_memmap(void)
> * memmap entries are valid from the bank end aligned to
> * MAX_ORDER_NR_PAGES.
> */
> - prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
> - MAX_ORDER_NR_PAGES);
> + prev_end = round_up(__phys_to_pfn(reg->base + reg->size),
> + MAX_ORDER_NR_PAGES);
> }
>
> #ifdef CONFIG_SPARSEMEM
More information about the linux-arm-kernel mailing list