[PATCH v6 02/41] arm64: mm: Take potential load offset into account when KASLR is off
Anshuman Khandual
anshuman.khandual at arm.com
Wed Nov 29 21:23:28 PST 2023
On 11/29/23 16:45, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb at kernel.org>
>
> We enable CONFIG_RELOCATABLE even when CONFIG_RANDOMIZE_BASE is
> disabled, and this permits the loader (i.e., EFI) to place the kernel
Indeed, this can be validated via a defconfig based build and boot.
> anywhere in physical memory as long as the base address is 64k aligned.
>
> This means that the 'KASLR' case described in the header that defines
> the size of the statically allocated page tables could take effect even
> when CONFIG_RANDOMIZE_BASE=n. So check for CONFIG_RELOCATABLE instead.
Makes sense.
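To make the scenario concrete for other readers (an illustrative userspace sketch with made-up numbers, not the actual early boot address arithmetic): once the base is only guaranteed to be 64k aligned, the image can cover one more entry at a given pagetable level than a fully aligned placement would, e.g.

#include <stdio.h>

#define SZ_64K	0x10000UL
#define SZ_2M	0x200000UL

int main(void)
{
	unsigned long size = 34UL << 20;		/* ~34M image, made up */
	unsigned long aligned = 16UL * SZ_2M;		/* 2M aligned base */
	unsigned long shifted = aligned + SZ_64K;	/* only 64k aligned */

	/* number of 2M table entries each placement covers */
	unsigned long a = ((aligned + size - 1) / SZ_2M) - (aligned / SZ_2M) + 1;
	unsigned long s = ((shifted + size - 1) / SZ_2M) - (shifted / SZ_2M) + 1;

	printf("2M aligned: %lu entries, 64k aligned: %lu entries\n", a, s);
	return 0;
}

This prints 17 vs 18 entries, i.e. the placement alone can push the span over one extra boundary even with the KASLR offset fixed at zero.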
>
> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> ---
> arch/arm64/include/asm/kernel-pgtable.h | 27 +++++---------------
> 1 file changed, 6 insertions(+), 21 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
> index 85d26143faa5..83ddb14b95a5 100644
> --- a/arch/arm64/include/asm/kernel-pgtable.h
> +++ b/arch/arm64/include/asm/kernel-pgtable.h
> @@ -37,27 +37,12 @@
>
>
> /*
> - * If KASLR is enabled, then an offset K is added to the kernel address
> - * space. The bottom 21 bits of this offset are zero to guarantee 2MB
> - * alignment for PA and VA.
> - *
> - * For each pagetable level of the swapper, we know that the shift will
> - * be larger than 21 (for the 4KB granule case we use section maps thus
> - * the smallest shift is actually 30) thus there is the possibility that
> - * KASLR can increase the number of pagetable entries by 1, so we make
> - * room for this extra entry.
> - *
> - * Note KASLR cannot increase the number of required entries for a level
> - * by more than one because it increments both the virtual start and end
> - * addresses equally (the extra entry comes from the case where the end
> - * address is just pushed over a boundary and the start address isn't).
> + * A relocatable kernel may execute from an address that differs from the one at
> + * which it was linked. In the worst case, its runtime placement may intersect
> + * with two adjacent PGDIR entries, which means that an additional page table
> + * may be needed at each subordinate level.
> */
This is a better explanation.
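The "at most one extra" part of the argument (which the removed comment spelled out explicitly) can also be sanity checked outside the kernel using the same formula as the SPAN_NR_ENTRIES() helper quoted further down; a standalone sketch with arbitrary numbers, not kernel code:

#include <assert.h>
#include <stdio.h>

/* same formula as the kernel helper */
#define SPAN_NR_ENTRIES(vstart, vend, shift) \
	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1)

int main(void)
{
	const int shift = 21;			/* 2M per entry, one example level */
	unsigned long start = 1UL << 40;	/* arbitrary aligned link address */
	unsigned long size = 34UL << 20;	/* arbitrary image size */
	unsigned long span = SPAN_NR_ENTRIES(start, start + size, shift);
	unsigned long worst = span;

	/* slide the whole range in 64k steps across one entry's worth of VA */
	for (unsigned long off = 0; off < (1UL << shift); off += 0x10000UL) {
		unsigned long n = SPAN_NR_ENTRIES(start + off, start + off + size, shift);
		assert(n <= span + 1);
		if (n > worst)
			worst = n;
	}
	printf("aligned: %lu entries, worst case: %lu entries\n", span, worst);
	return 0;
}

Because both start and end move by the same offset, only the end can be pushed over one additional boundary, so the worst case stays at span + 1.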
> -
> -#ifdef CONFIG_RANDOMIZE_BASE
> -#define EARLY_KASLR (1)
> -#else
> -#define EARLY_KASLR (0)
> -#endif
> +#define EXTRA_PAGE __is_defined(CONFIG_RELOCATABLE)
EXTRA_INIT_DIR_PAGE instead? Just to give it some more context.
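Either way, __is_defined() collapses the old five-line #ifdef block into one line while still yielding a constant 0 or 1 that EARLY_PAGES() can add directly. Roughly how that works, paraphrasing the helpers from include/linux/kconfig.h (CONFIG_RELOCATABLE is defined by hand here only for the demo):

#include <stdio.h>

#define CONFIG_RELOCATABLE 1			/* pretend Kconfig set =y */

/* paraphrased from include/linux/kconfig.h */
#define __ARG_PLACEHOLDER_1			0,
#define __take_second_arg(__ignored, val, ...)	val
#define ____is_defined(arg1_or_junk)		__take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val)			____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x)				___is_defined(x)

#define EXTRA_PAGE	__is_defined(CONFIG_RELOCATABLE)

int main(void)
{
	/* prints 1; drop the CONFIG_RELOCATABLE define above and it prints 0 */
	printf("EXTRA_PAGE = %d\n", EXTRA_PAGE);
	return 0;
}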
>
> #define SPAN_NR_ENTRIES(vstart, vend, shift) \
> ((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1)
> @@ -83,7 +68,7 @@
> + EARLY_PGDS((vstart), (vend), add) /* each PGDIR needs a next level page table */ \
> + EARLY_PUDS((vstart), (vend), add) /* each PUD needs a next level page table */ \
> + EARLY_PMDS((vstart), (vend), add)) /* each PMD needs a next level page table */
> -#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR))
> +#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EXTRA_PAGE))
>
> /* the initial ID map may need two extra pages if it needs to be extended */
> #if VA_BITS < 48
Regardless,
Reviewed-by: Anshuman Khandual <anshuman.khandual at arm.com>