[PATCH] mm: vmalloc: use VMALLOC_EARLY_START boundary for early vmap area

Dev Jain dev.jain at arm.com
Mon Jul 21 23:48:11 PDT 2025


On 22/07/25 9:38 am, Jia He wrote:
> When VMALLOC_START is redefined to a new boundary, most subsystems
> continue to function correctly. However, vm_area_register_early()
> assumes it can use the global vmlist structure before vmalloc_init()
> is invoked. This assumption can lead to issues during early boot.
>
> See the calltrace as follows:
> 	start_kernel()
> 		setup_per_cpu_areas()
> 			pcpu_page_first_chunk()
> 				vm_area_register_early()
> 		mm_core_init()
> 			vmalloc_init()
>
> The early vm areas will be added to vmlist at declare_kernel_vmas()
> ->declare_vma():
> ffff800080010000 T _stext
> ffff800080da0000 D __start_rodata
> ffff800081890000 T __inittext_begin
> ffff800081980000 D __initdata_begin
> ffff800081ee0000 D _data
> The starting address of the early areas is tied to the *old* VMALLOC_START
> (i.e. 0xffff800080000000 on an arm64 N2 server).
>
> If VMALLOC_START is redefined, it can disrupt early VM area allocation,
> particularly in paths like pcpu_page_first_chunk()->vm_area_register_early().
>
> To address this potential risk on arm64, introduce a new boundary,
> VMALLOC_EARLY_START, to avoid boot issues when VMALLOC_START is
> occasionally redefined.

Sorry, but I am unable to understand the point of the patch. If a particular
value of VMALLOC_START causes a problem because the vma declarations of the
kernel are tied to that value, surely we should be reasoning about what is
wrong with the new value, rather than circumventing the actual problem
by introducing VMALLOC_EARLY_START?

Also, judging by your patch description, I don't think you have run into a
reproducible boot issue, so isn't this patch basically adding dead code,
given that both macros are defined to MODULES_END?


>
> Signed-off-by: Jia He <justin.he at arm.com>
> ---
>   arch/arm64/include/asm/pgtable.h | 2 ++
>   mm/vmalloc.c                     | 6 +++++-
>   2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 192d86e1cc76..91031912a906 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -18,9 +18,11 @@
>    * VMALLOC range.
>    *
>    * VMALLOC_START: beginning of the kernel vmalloc space
> + * VMALLOC_EARLY_START: early vm area before vmalloc_init()
>    * VMALLOC_END: extends to the available space below vmemmap
>    */
>   #define VMALLOC_START		(MODULES_END)
> +#define VMALLOC_EARLY_START	(MODULES_END)
>   #if VA_BITS == VA_BITS_MIN
>   #define VMALLOC_END		(VMEMMAP_START - SZ_8M)
>   #else
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6dbcdceecae1..86ab1e99641a 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -50,6 +50,10 @@
>   #include "internal.h"
>   #include "pgalloc-track.h"
>   
> +#ifndef VMALLOC_EARLY_START
> +#define VMALLOC_EARLY_START		VMALLOC_START
> +#endif
> +
>   #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
>   static unsigned int __ro_after_init ioremap_max_page_shift = BITS_PER_LONG - 1;
>   
> @@ -3126,7 +3130,7 @@ void __init vm_area_add_early(struct vm_struct *vm)
>    */
>   void __init vm_area_register_early(struct vm_struct *vm, size_t align)
>   {
> -	unsigned long addr = ALIGN(VMALLOC_START, align);
> +	unsigned long addr = ALIGN(VMALLOC_EARLY_START, align);
>   	struct vm_struct *cur, **p;
>   
>   	BUG_ON(vmap_initialized);



More information about the linux-arm-kernel mailing list