[PATCH] vmap(): don't allow invalid pages

Robin Murphy robin.murphy at arm.com
Wed Jan 19 05:28:14 PST 2022


On 2022-01-18 23:52, Yury Norov wrote:
> vmap() takes struct page *pages as one of its arguments, and a caller may
> provide an invalid pointer which would lead to a DABT at address translation
> later.
> 
> Currently, the kernel only checks the pages against NULL. In my case,
> however, the address was not NULL, and was large enough that the hardware
> generated an Address Size Abort on arm64.
> 
> Interestingly, this abort happens even if copy_from_kernel_nofault() is
> used, which is quite inconvenient for debugging purposes.
> 
> This patch adds a pfn_valid() check into the vmap() path, so that an
> invalid mapping will not be created.
> 
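To make the failure mode concrete, a caller along these lines (entirely made
up for illustration; 0xdeadbeef just stands in for whatever stray value ends
up in the array) gets no error from vmap() today, and the abort only fires on
the first access through the returned address:

	struct page *pages[2];
	void *va;

	pages[0] = alloc_page(GFP_KERNEL);	/* valid page */
	pages[1] = (struct page *)0xdeadbeef;	/* corrupted entry, not NULL */

	/*
	 * Only the NULL check runs at the moment, so a PTE is written for
	 * whatever pfn the bogus pointer happens to translate to.
	 */
	va = vmap(pages, 2, VM_MAP, PAGE_KERNEL);
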
> RFC: https://lkml.org/lkml/2022/1/18/815
> v1: use pfn_valid() instead of adding an arch-specific
>      arch_vmap_page_valid(). Thanks to Matthew Wilcox for the hint.
> 
> Signed-off-by: Yury Norov <yury.norov at gmail.com>
> ---
>   mm/vmalloc.c | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d2a00ad4e1dd..a4134ee56b10 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -477,6 +477,8 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
>   			return -EBUSY;
>   		if (WARN_ON(!page))
>   			return -ENOMEM;
> +		if (WARN_ON(!pfn_valid(page_to_pfn(page))))

Is page_to_pfn() guaranteed to work without blowing up if page is invalid in
the first place? Looking at the CONFIG_SPARSEMEM case I'm not sure that's
true...
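
For reference, the classic (non-VMEMMAP) SPARSEMEM definition is roughly this
(paraphrased from asm-generic/memory_model.h from memory, not quoted exactly):

	#define __page_to_pfn(pg)					\
	({	const struct page *__pg = (pg);				\
		int __sec = page_to_section(__pg);			\
		(unsigned long)(__pg -					\
			__section_mem_map_addr(__nr_to_section(__sec))); \
	})

page_to_section() loads __pg->flags, so a sufficiently wild pointer can fault
on that load before pfn_valid() ever sees a pfn, and __nr_to_section() can
return NULL for a nonexistent section, which __section_mem_map_addr() would
then dereference.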

Robin.

> +			return -EINVAL;
>   		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
>   		(*nr)++;
>   	} while (pte++, addr += PAGE_SIZE, addr != end);


