[PATCH] arm64: kasan: Fix zero shadow mapping overriding kernel image shadow

Ard Biesheuvel ard.biesheuvel at linaro.org
Thu Mar 10 18:25:43 PST 2016


On 11 March 2016 at 01:57, Catalin Marinas <catalin.marinas at arm.com> wrote:
> With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
> PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
> kimg_shadow_end is not page aligned (_end shifted by
> KASAN_SHADOW_SCALE_SHIFT), the edges of the kernel image shadow
> previously mapped via vmemmap_populate() may be overridden by the
> subsequent call to kasan_populate_zero_shadow(), leading to kernel
> panics like the one below:
>
> ------------------------------------------------------------------------------
> Unable to handle kernel paging request at virtual address fffffc100135068c
> pgd = fffffc8009ac0000
> [fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
> Internal error: Oops: 9600004f [#1] PREEMPT SMP
> Modules linked in:
> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
> Hardware name: Juno (DT)
> task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
> PC is at __memset+0x4c/0x200
> LR is at kasan_unpoison_shadow+0x34/0x50
> pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
> sp : fffffe0900203db0
> x29: fffffe0900203db0 x28: 0000000000000000
> x27: 0000000000000000 x26: 0000000000000000
> x25: fffffc80099b69d0 x24: 0000000000000001
> x23: 0000000000000000 x22: 0000000000002000
> x21: dffffc8000000000 x20: 1fffff9001350a8c
> x19: 0000000000002000 x18: 0000000000000008
> x17: 0000000000000147 x16: ffffffffffffffff
> x15: 79746972100e041d x14: ffffff0000000000
> x13: ffff000000000000 x12: 0000000000000000
> x11: 0101010101010101 x10: 1fffffc11c000000
> x9 : 0000000000000000 x8 : fffffc100135068c
> x7 : 0000000000000000 x6 : 000000000000003f
> x5 : 0000000000000040 x4 : 0000000000000004
> x3 : fffffc100134f651 x2 : 0000000000000400
> x1 : 0000000000000000 x0 : fffffc100135068c
>
> Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
> Call trace:
> [<fffffc800846f1cc>] __memset+0x4c/0x200
> [<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
> [<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
> [<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
> [<fffffc80089e1948>] kernel_init+0x10/0xf8
> [<fffffc8008093a00>] ret_from_fork+0x10/0x50
> ------------------------------------------------------------------------------
>
> This patch aligns kimg_shadow_start and kimg_shadow_end to
> SWAPPER_BLOCK_SIZE in all configurations.
>
> Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
> Signed-off-by: Catalin Marinas <catalin.marinas at arm.com>

Acked-by: Ard Biesheuvel <ard.biesheuvel at linaro.org>

> ---
>  arch/arm64/mm/kasan_init.c | 13 +++++--------
>  1 file changed, 5 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index a164183f3481..757009daa9ed 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -159,15 +159,12 @@ void __init kasan_init(void)
>          * vmemmap_populate() has populated the shadow region that covers the
>          * kernel image with SWAPPER_BLOCK_SIZE mappings, so we have to round
>          * the start and end addresses to SWAPPER_BLOCK_SIZE as well, to prevent
> -        * kasan_populate_zero_shadow() from replacing the PMD block mappings
> -        * with PMD table mappings at the edges of the shadow region for the
> -        * kernel image.
> +        * kasan_populate_zero_shadow() from replacing the page table entries
> +        * (PMD or PTE) at the edges of the shadow region for the kernel
> +        * image.
>          */
> -       if (ARM64_SWAPPER_USES_SECTION_MAPS) {
> -               kimg_shadow_start = round_down(kimg_shadow_start,
> -                                              SWAPPER_BLOCK_SIZE);
> -               kimg_shadow_end = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);
> -       }
> +       kimg_shadow_start = round_down(kimg_shadow_start, SWAPPER_BLOCK_SIZE);
> +       kimg_shadow_end = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);
>
>         kasan_populate_zero_shadow((void *)KASAN_SHADOW_START,
>                                    (void *)mod_shadow_start);
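
For reference, a minimal user-space sketch of the shadow-address rounding
described above (not part of the patch): the image bounds, the
KASAN_SHADOW_OFFSET value and the 64KB block size are illustrative
assumptions, and round_down()/round_up() are simplified stand-ins for the
kernel macros of the same names.

/*
 * Illustrative only: shows why the kernel image shadow bounds must be
 * rounded to SWAPPER_BLOCK_SIZE before populating the zero shadow on
 * either side of them.
 */
#include <stdio.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT  3             /* 1 shadow byte per 8 bytes */
#define SWAPPER_BLOCK_SIZE        (64 * 1024)   /* == PAGE_SIZE, no section maps */

#define round_down(x, a)  ((x) & ~((uint64_t)(a) - 1))
#define round_up(x, a)    (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

int main(void)
{
	/* Hypothetical kernel image bounds; _end is rarely block aligned. */
	uint64_t text_start = 0xfffffc0008080000ULL;   /* stands in for _text */
	uint64_t image_end  = 0xfffffc0009a8d6c4ULL;   /* stands in for _end  */
	uint64_t shadow_off = 0xdffffc8000000000ULL;   /* assumed KASAN_SHADOW_OFFSET */

	uint64_t kimg_shadow_start = (text_start >> KASAN_SHADOW_SCALE_SHIFT) + shadow_off;
	uint64_t kimg_shadow_end   = (image_end  >> KASAN_SHADOW_SCALE_SHIFT) + shadow_off;

	printf("unaligned: 0x%llx - 0x%llx\n",
	       (unsigned long long)kimg_shadow_start,
	       (unsigned long long)kimg_shadow_end);

	/*
	 * vmemmap_populate() mapped this range with SWAPPER_BLOCK_SIZE
	 * granularity, so the zero-shadow population on either side must
	 * start and stop on the same boundaries, otherwise it replaces the
	 * already-populated entries at the edges.
	 */
	kimg_shadow_start = round_down(kimg_shadow_start, SWAPPER_BLOCK_SIZE);
	kimg_shadow_end   = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);

	printf("aligned:   0x%llx - 0x%llx\n",
	       (unsigned long long)kimg_shadow_start,
	       (unsigned long long)kimg_shadow_end);
	return 0;
}

Because _end is rarely SWAPPER_BLOCK_SIZE aligned, the unaligned
kimg_shadow_end lands inside a block entry that vmemmap_populate() has
already installed; rounding both bounds keeps kasan_populate_zero_shadow()
from replacing that entry with the zero shadow.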


