[PATCH v2 1/3] vmalloc: Choose a better start address in vm_area_register_early()

Catalin Marinas catalin.marinas at arm.com
Sun Aug 1 08:23:12 PDT 2021


On Tue, Jul 20, 2021 at 10:51:03AM +0800, Kefeng Wang wrote:
> There are some fixed locations in the vmalloc area reserved on ARM
> (see iotable_init()) and ARM64 (see map_kernel()), but
> pcpu_page_first_chunk() calls vm_area_register_early(), which chooses
> VMALLOC_START as the start address of the vmap area. This can conflict
> with the addresses reserved above and trigger the BUG_ON in
> vm_area_add_early().
> 
> Let's choose the end of the last existing address range in the vmlist
> as the start address instead of VMALLOC_START to avoid the BUG_ON.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang at huawei.com>
> ---
>  mm/vmalloc.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d5cd52805149..a98cf97f032f 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2238,12 +2238,14 @@ void __init vm_area_add_early(struct vm_struct *vm)
>   */
>  void __init vm_area_register_early(struct vm_struct *vm, size_t align)
>  {
> -	static size_t vm_init_off __initdata;
> +	unsigned long vm_start = VMALLOC_START;
> +	struct vm_struct *tmp;
>  	unsigned long addr;
>  
> -	addr = ALIGN(VMALLOC_START + vm_init_off, align);
> -	vm_init_off = PFN_ALIGN(addr + vm->size) - VMALLOC_START;
> +	for (tmp = vmlist; tmp; tmp = tmp->next)
> +		vm_start = (unsigned long)tmp->addr + tmp->size;
>  
> +	addr = ALIGN(vm_start, align);
>  	vm->addr = (void *)addr;
>  
>  	vm_area_add_early(vm);

Is there a risk of breaking other architectures? It doesn't look like it
to me, but I thought I'd ask.

Also, instead of always picking the end, could we search for a range
that fits?
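
Something along the lines of the sketch below, perhaps (untested, and it
assumes the vmlist is kept sorted by address, which vm_area_add_early()
appears to maintain):

void __init vm_area_register_early(struct vm_struct *vm, size_t align)
{
	unsigned long addr = ALIGN(VMALLOC_START, align);
	struct vm_struct *cur;

	/*
	 * Walk the (address-ordered) vmlist and take the first gap large
	 * enough for the requested size/alignment; if nothing fits, we
	 * end up just past the last registered area, as in the patch.
	 */
	for (cur = vmlist; cur; cur = cur->next) {
		if (addr + vm->size <= (unsigned long)cur->addr)
			break;
		addr = ALIGN((unsigned long)cur->addr + cur->size, align);
	}

	vm->addr = (void *)addr;

	vm_area_add_early(vm);
}

That way an early reservation that happens to sit in the middle of the
vmalloc area doesn't push every later registration past it.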

-- 
Catalin
