[PATCH] arm64: mm: account for hotplug memory when randomizing the linear region
Ard Biesheuvel
ardb at kernel.org
Sat Oct 17 08:39:11 EDT 2020
On Wed, 14 Oct 2020 at 10:19, Ard Biesheuvel <ardb at kernel.org> wrote:
>
> As a hardening measure, we currently randomize the placement of
> physical memory inside the linear region when KASLR is in effect.
> Since the random offset at which to place the available physical
> memory inside the linear region is chosen early at boot, it is
> based on the memblock description of memory, which does not cover
> hotplug memory. The consequence of this is that the randomization
> offset may be chosen such that any hotplugged memory located above
> memblock_end_of_DRAM() that appears later is pushed off the end of
> the linear region, where it cannot be accessed.
>
> So let's limit this randomization of the linear region to ensure
> that this can no longer happen, by using the CPU's addressable PA
> range instead. As it is guaranteed that no hotpluggable memory will
> appear that falls outside of that range, we can safely put this PA
> range sized window anywhere in the linear region.
>
> Cc: Anshuman Khandual <anshuman.khandual at arm.com>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: Will Deacon <will at kernel.org>
> Cc: Steven Price <steven.price at arm.com>
> Cc: Robin Murphy <robin.murphy at arm.com>
> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> ---
> Related to discussion here:
> https://lore.kernel.org/linux-arm-kernel/1600332402-30123-1-git-send-email-anshuman.khandual@arm.com/
>
> arch/arm64/mm/init.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 0b393c275be0..af1b4ed2daa8 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -350,13 +350,16 @@ void __init arm64_memblock_init(void)
>
> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
> extern u16 memstart_offset_seed;
> - u64 range = linear_region_size -
> - (memblock_end_of_DRAM() - memblock_start_of_DRAM());
> + u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
> + int parange = cpuid_feature_extract_unsigned_field(
> + mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
> + s64 range = linear_region_size -
> + BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
>
> /*
> * If the size of the linear region exceeds, by a sufficient
> - * margin, the size of the region that the available physical
> - * memory spans, randomize the linear region as well.
> + * margin, the size of the region that the physical memory can
> + * span, randomize the linear region as well.
> */
> if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> range /= ARM64_MEMSTART_ALIGN;
The comparison here should be modified to read

    range >= (s64)ARM64_MEMSTART_ALIGN

or the s64 LHS will get promoted to u64 by the usual arithmetic
conversions, so a negative range compares as a huge positive value and
yields the wrong result.