[PATCH v5 5/5] mm: page_alloc: reduce unnecessary binary search in early_pfn_valid()
Ard Biesheuvel
ard.biesheuvel at linaro.org
Mon Apr 2 00:00:37 PDT 2018
On 2 April 2018 at 04:30, Jia He <hejianet at gmail.com> wrote:
> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> where possible") optimized the loop in memmap_init_zone(). But there is
> still some room for improvement. E.g. in early_pfn_valid(), if pfn and
> pfn+1 are in the same memblock region, we can record the last returned
> memblock region index and check whether pfn+1 is still in the same region.
>
> Currently it only improves performance on arm64 and has no impact on
> other arches.
>
How much does it improve the performance? And in which cases?
I guess it improves boot time on systems with physical address spaces
that are sparsely populated with DRAM, but you really have to quantify
this if you want other people to care.
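For anyone reading along who has not followed the earlier patches in
this series: early_pfn_valid() is redirected to pfn_valid_region(),
which caches the index of the memblock region that matched the
previous lookup, so consecutive pfns in the same region avoid a fresh
binary search. A rough sketch of the idea (my own illustration, not
the actual code from the series):

/* Sketch only; relies on <linux/memblock.h> and <linux/pfn.h>. */

static int early_region_idx = -1;

/*
 * Return whether @pfn is backed by mapped memory, caching the index
 * of the matching memblock.memory region so that the next lookup for
 * a nearby pfn can skip the binary search.
 */
static int pfn_valid_region(unsigned long pfn)
{
	struct memblock_type *type = &memblock.memory;
	struct memblock_region *regions = type->regions;
	unsigned long start_pfn, end_pfn;
	int lo = 0, hi = type->cnt - 1;

	/* Fast path: same region as the previous successful lookup. */
	if (early_region_idx >= 0) {
		start_pfn = PFN_DOWN(regions[early_region_idx].base);
		end_pfn = PFN_DOWN(regions[early_region_idx].base +
				   regions[early_region_idx].size);
		if (pfn >= start_pfn && pfn < end_pfn)
			return !memblock_is_nomap(&regions[early_region_idx]);
	}

	/* Slow path: binary search over memblock.memory, cache the hit. */
	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;

		start_pfn = PFN_DOWN(regions[mid].base);
		end_pfn = PFN_DOWN(regions[mid].base + regions[mid].size);

		if (pfn < start_pfn)
			hi = mid - 1;
		else if (pfn >= end_pfn)
			lo = mid + 1;
		else {
			early_region_idx = mid;
			return !memblock_is_nomap(&regions[mid]);
		}
	}

	return 0;
}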
> Signed-off-by: Jia He <jia.he at hxt-semitech.com>
> ---
> include/linux/mmzone.h | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index f9c0c46..079f468 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1268,9 +1268,14 @@ static inline int pfn_present(unsigned long pfn)
> })
> #else
> #define pfn_to_nid(pfn) (0)
> -#endif
> +#endif /*CONFIG_NUMA*/
>
> +#ifdef CONFIG_HAVE_ARCH_PFN_VALID
> +#define early_pfn_valid(pfn) pfn_valid_region(pfn)
> +#else
> #define early_pfn_valid(pfn) pfn_valid(pfn)
> +#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
> +
> void sparse_init(void);
> #else
> #define sparse_init() do {} while (0)
> --
> 2.7.4
>
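For context, the loop that benefits is the early pfn walk in
memmap_init_zone(), which visits pfns in ascending order, which is
exactly the access pattern a cached region index helps. Roughly (a
simplified excerpt from memory, details elided):

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		/* Consecutive pfns usually fall in the same memblock region. */
		if (!early_pfn_valid(pfn))
			continue;
		if (!early_pfn_in_nid(pfn, nid))
			continue;

		/* ... initialise the struct page for this pfn ... */
	}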