[v6 04/15] mm: discard memblock data later

Michal Hocko mhocko at kernel.org
Fri Aug 11 02:32:49 PDT 2017


[CC Mel]

On Mon 07-08-17 16:38:38, Pavel Tatashin wrote:
> There is an existing use-after-free bug when deferred struct pages are
> enabled:
> 
> memblock_add() allocates memory for the memory array if more than
> 128 entries are needed.  See the comment in e820__memblock_setup():
> 
>   * The bootstrap memblock region count maximum is 128 entries
>   * (INIT_MEMBLOCK_REGIONS), but EFI might pass us more E820 entries
>   * than that - so allow memblock resizing.
> 
> This memblock memory is freed here:
>         free_low_memory_core_early()
> 
> We access the freed memblock.memory later in boot when deferred pages are
> initialized in this path:
> 
>         deferred_init_memmap()
>                 for_each_mem_pfn_range()
>                   __next_mem_pfn_range()
>                     type = &memblock.memory;

Yes, you seem to be right.
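
To spell it out: the iterator never snapshots the regions array, it
dereferences the global memblock.memory on every step, so once
free_low_memory_core_early() has dropped a dynamically grown array every
iteration reads freed memory. Simplified from __next_mem_pfn_range() in
mm/memblock.c (slightly trimmed):

	void __init_memblock __next_mem_pfn_range(int *idx, int nid,
				unsigned long *out_start_pfn,
				unsigned long *out_end_pfn, int *out_nid)
	{
		struct memblock_type *type = &memblock.memory;
		struct memblock_region *r;

		while (++*idx < type->cnt) {
			/* stale pointer if type->regions has been freed */
			r = &type->regions[*idx];

			if (PFN_UP(r->base) >= PFN_DOWN(r->base + r->size))
				continue;
			if (nid == MAX_NUMNODES || nid == r->nid)
				break;
		}
		if (*idx >= type->cnt) {
			*idx = -1;
			return;
		}

		if (out_start_pfn)
			*out_start_pfn = PFN_UP(r->base);
		if (out_end_pfn)
			*out_end_pfn = PFN_DOWN(r->base + r->size);
		if (out_nid)
			*out_nid = r->nid;
	}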
>
> One possible explanation for why this use-after-free hasn't been hit
> before is that the limit of INIT_MEMBLOCK_REGIONS has never been
> exceeded, at least on systems where deferred struct pages were enabled.

Yeah, this sounds like the case.
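
For reference, memblock starts out with static __initdata arrays sized
INIT_MEMBLOCK_REGIONS (128), and memblock_double_array() only switches a
type over to a dynamically allocated array once that limit is exceeded,
so on most machines there is simply nothing to free. Roughly, from
include/linux/memblock.h and mm/memblock.c:

	#define INIT_MEMBLOCK_REGIONS	128

	/* replaced by a dynamically allocated array once region 129 is added */
	static struct memblock_region memblock_memory_init_regions[INIT_MEMBLOCK_REGIONS] __initdata_memblock;
	static struct memblock_region memblock_reserved_init_regions[INIT_MEMBLOCK_REGIONS] __initdata_memblock;

That also makes "type->regions != the static array" the natural test for
"this array was allocated and needs freeing", which is what your
memblock_discard() below relies on.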
 
> Another reason we want this problem fixed in this patch series is
> that, in the next patch, we will need to access memblock.reserved from
> deferred_init_memmap().
> 

I guess this goes all the way down to 
Fixes: 7e18adb4f80b ("mm: meminit: initialise remaining struct pages in parallel with kswapd")
> Signed-off-by: Pavel Tatashin <pasha.tatashin at oracle.com>
> Reviewed-by: Steven Sistare <steven.sistare at oracle.com>
> Reviewed-by: Daniel Jordan <daniel.m.jordan at oracle.com>
> Reviewed-by: Bob Picco <bob.picco at oracle.com>

Considering that some HW might behave strangely and this would be rather
hard to debug, I would be tempted to mark this for stable. It should also
be merged separately from the rest of the series.

I have just one nit below
Acked-by: Michal Hocko <mhocko at suse.com>

[...]
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 2cb25fe4452c..bf14aea6ab70 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -285,31 +285,27 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u
>  }
>  
>  #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK

Pull this ifdef inside memblock_discard and you will not need another
one in page_alloc_init_late.
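
I.e. provide a no-op stub for the !CONFIG_ARCH_DISCARD_MEMBLOCK case,
roughly like this in include/linux/memblock.h (untested sketch):

	#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
	void memblock_discard(void);
	#else
	/* nothing was dynamically allocated, nothing to discard */
	static inline void memblock_discard(void)
	{
	}
	#endif

Then page_alloc_init_late() can call memblock_discard() unconditionally
and the compiler throws the empty inline away.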

[...]
> +/**
> + * memblock_discard - discard memory and reserved arrays if they were allocated
> + */
> +void __init memblock_discard(void)
>  {

here

> -	if (memblock.memory.regions == memblock_memory_init_regions)
> -		return 0;
> +	phys_addr_t addr, size;
>  
> -	*addr = __pa(memblock.memory.regions);
> +	if (memblock.reserved.regions != memblock_reserved_init_regions) {
> +		addr = __pa(memblock.reserved.regions);
> +		size = PAGE_ALIGN(sizeof(struct memblock_region) *
> +				  memblock.reserved.max);
> +		__memblock_free_late(addr, size);
> +	}
>  
> -	return PAGE_ALIGN(sizeof(struct memblock_region) *
> -			  memblock.memory.max);
> +	if (memblock.memory.regions != memblock_memory_init_regions) {
> +		addr = __pa(memblock.memory.regions);
> +		size = PAGE_ALIGN(sizeof(struct memblock_region) *
> +				  memblock.memory.max);
> +		__memblock_free_late(addr, size);
> +	}
>  }
> -
>  #endif
[...]
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index fc32aa81f359..63d16c185736 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1584,6 +1584,10 @@ void __init page_alloc_init_late(void)
>  	/* Reinit limits that are based on free pages after the kernel is up */
>  	files_maxfiles_init();
>  #endif
> +#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
> +	/* Discard memblock private memory */
> +	memblock_discard();
> +#endif
>  
>  	for_each_populated_zone(zone)
>  		set_zone_contiguous(zone);
> -- 
> 2.14.0

-- 
Michal Hocko
SUSE Labs


