[PATCH] arm64: kexec: load from kimage prior to clobbering

Pasha Tatashin pasha.tatashin at soleen.com
Mon May 16 12:59:43 PDT 2022


On Mon, May 16, 2022 at 12:07 PM Mark Rutland <mark.rutland at arm.com> wrote:
>
> In arm64_relocate_new_kernel() we load some fields out of the kimage
> structure after relocation has occurred. As the kimage structure isn't
> allocated to be relocation-safe, it may be clobbered during relocation,
> and we may load junk values out of the structure.
>
> Due to this, kexec may fail when the kimage allocation happens to fall
> within a PA range that an object will be relocated to. This has been
> observed to occur for regular kexec on a QEMU TCG 'virt' machine with
> 2GiB of RAM, where the PA range of the new kernel image overlaps the
> kimage structure.
>
> Avoid this by ensuring we load all values from the kimage structure
> prior to relocation.
>
> I've tested this atop v5.16 and v5.18-rc6.

LGTM, thanks.
Reviewed-by: Pasha Tatashin <pasha.tatashin at soleen.com>

>
> Fixes: 878fdbd704864352 ("arm64: kexec: pass kimage as the only argument to relocation function")
> Signed-off-by: Mark Rutland <mark.rutland at arm.com>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: James Morse <james.morse at arm.com>
> Cc: Pasha Tatashin <pasha.tatashin at soleen.com>
> Cc: Will Deacon <will at kernel.org>
> ---
>  arch/arm64/kernel/relocate_kernel.S | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
> index f0a3df9e18a32..413f899e4ac63 100644
> --- a/arch/arm64/kernel/relocate_kernel.S
> +++ b/arch/arm64/kernel/relocate_kernel.S
> @@ -37,6 +37,15 @@
>   * safe memory that has been set up to be preserved during the copy operation.
>   */
>  SYM_CODE_START(arm64_relocate_new_kernel)
> +       /*
> +        * The kimage structure isn't allocated specially and may be clobbered
> +        * during relocation. We must load any values we need from it prior to
> +        * any relocation occurring.
> +        */
> +       ldr     x28, [x0, #KIMAGE_START]
> +       ldr     x27, [x0, #KIMAGE_ARCH_EL2_VECTORS]
> +       ldr     x26, [x0, #KIMAGE_ARCH_DTB_MEM]
> +
>         /* Setup the list loop variables. */
>         ldr     x18, [x0, #KIMAGE_ARCH_ZERO_PAGE] /* x18 = zero page for BBM */
>         ldr     x17, [x0, #KIMAGE_ARCH_TTBR1]   /* x17 = linear map copy */
> @@ -72,21 +81,20 @@ SYM_CODE_START(arm64_relocate_new_kernel)
>         ic      iallu
>         dsb     nsh
>         isb
> -       ldr     x4, [x0, #KIMAGE_START]                 /* relocation start */
> -       ldr     x1, [x0, #KIMAGE_ARCH_EL2_VECTORS]      /* relocation start */
> -       ldr     x0, [x0, #KIMAGE_ARCH_DTB_MEM]          /* dtb address */
>         turn_off_mmu x12, x13
>
>         /* Start new image. */
> -       cbz     x1, .Lel1
> -       mov     x1, x4                          /* relocation start */
> -       mov     x2, x0                          /* dtb address */
> +       cbz     x27, .Lel1
> +       mov     x1, x28                         /* kernel entry point */
> +       mov     x2, x26                         /* dtb address */
>         mov     x3, xzr
>         mov     x4, xzr
>         mov     x0, #HVC_SOFT_RESTART
>         hvc     #0                              /* Jumps from el2 */
>  .Lel1:
> +       mov     x0, x26                         /* dtb address */
> +       mov     x1, xzr
>         mov     x2, xzr
>         mov     x3, xzr
> -       br      x4                              /* Jumps from el1 */
> +       br      x28                             /* Jumps from el1 */
>  SYM_CODE_END(arm64_relocate_new_kernel)
> --
> 2.30.2
>
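
As an aside for readers less familiar with the assembly: the hazard is easy to
reproduce in plain C. Below is a minimal user-space sketch (struct reloc_info,
struct fake_ram and friends are hypothetical stand-ins, not kernel API) of why
every field has to be read before the copy - the same reason the patch caches
KIMAGE_START, KIMAGE_ARCH_EL2_VECTORS and KIMAGE_ARCH_DTB_MEM in x26-x28 up
front.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Stand-ins for the kernel objects; none of these names are kernel API.
 * The control structure sits inside the memory the "relocation" writes,
 * mirroring a kimage allocation that falls inside a destination PA range.
 */
struct reloc_info {
	uint64_t start;			/* entry point of the new image */
	uint64_t dtb_mem;		/* address of the device tree blob */
};

struct fake_ram {
	struct reloc_info info;		/* control structure ... */
	unsigned char pages[112];	/* ... inside the copy destination */
};

int main(void)
{
	struct fake_ram ram;

	ram.info.start = 0x1000;
	ram.info.dtb_mem = 0x2000;

	/* Correct: cache every needed field before the destructive copy,
	 * as the patch does with the callee-saved registers x26-x28. */
	uint64_t start = ram.info.start;
	uint64_t dtb = ram.info.dtb_mem;

	/* The "relocation": new image data lands on top of the structure. */
	memset(&ram, 0xff, sizeof(ram));

	/* The cached copies survive; reads through the structure are junk. */
	printf("cached:    start=0x%" PRIx64 " dtb=0x%" PRIx64 "\n", start, dtb);
	printf("clobbered: start=0x%" PRIx64 " dtb=0x%" PRIx64 "\n",
	       ram.info.start, ram.info.dtb_mem);
	return 0;
}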
