arm64: Linear mapping is mapped at the same static virtual address

Seth Jenkins sethjenkins at google.com
Fri Aug 1 13:01:25 PDT 2025


+linux-arm-kernel

Okay, given that this is already public knowledge and technically
working-as-intended, I derestricted the bug report.

> the decision whether or not to randomize the placement of the system's
> DRAM inside the linear map is based on the capabilities of the CPU
> rather than how much memory is present at boot time. This change was
> necessary because memory hotplug may result in DRAM appearing in places
> that are not covered by the linear region at all (and therefore
> unusable) if the decision is solely based on the memory map at boot.

If I understand correctly, given that the PA space is already larger
than the linear region size, it is *already* impossible to prevent
memory hotplug from theoretically causing DRAM to appear at a place
the linear region does not cover. Please correct me if I'm wrong,
but it sounds like removing randomization doesn't resolve this issue,
*and* it turns the linear map into an *exceptionally* useful
exploitation target/primitive that makes KASLR substantially weaker
than it already was.
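
To put rough numbers on it (illustrative assumptions on my part, not
values from any particular device): with VA_BITS=39 the linear region
is 256GiB, while a CPU reporting a 40-bit PARange spans 1TiB of PA
space, so the computed range is deeply negative and nothing about the
boot-time memory map even enters the comparison:

    /*
     * Standalone sketch, not kernel code: mirrors the range calculation
     * quoted below with invented-but-plausible values (VA_BITS=39,
     * PARange field reporting 40 bits of PA space).
     */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            int64_t linear_region_size = 1LL << 38; /* 256GiB */
            int64_t pa_size            = 1LL << 40; /* 1TiB   */
            int64_t range = linear_region_size - pa_size;

            /* Prints -768: the range is negative, the offset step gets
             * skipped, and the linear map base is the same every boot. */
            printf("range = %lld GiB\n", (long long)(range / (1LL << 30)));
            return 0;
    }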

If we really need to support the absolute maximum amount of
hotpluggable RAM and we can't sacrifice a couple of GB of address
space to enable randomization by default, maybe we could consider
having a CONFIG option for it instead - and require that config
option on Android GKI, if it's not an option to just remove hotplug
support from GKI entirely.

Jann off-handedly suggested having a CONFIG option that says "I will
never need more than X amount of physical RAM", which sounds
reasonable as well.
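
Purely as an illustration of that idea (the names and numbers below
are invented, and this ignores how such a cap should interact with
hotplug requests above it), a build-time bound on the assumed
physical span would make the same calculation come out positive
again:

    /*
     * Standalone sketch, not a patch: models a build-time cap on the
     * assumed physical address span. CONFIG_MAX_PHYS_BITS is an
     * invented name standing in for "I will never need more than X".
     */
    #include <stdint.h>
    #include <stdio.h>

    #define LINEAR_REGION_SIZE   (1LL << 38) /* 256GiB with VA_BITS=39 */
    #define CPU_PA_SPAN          (1LL << 40) /* 1TiB, 40-bit PARange   */
    #define CONFIG_MAX_PHYS_BITS 36          /* cap PA space at 64GiB  */

    int main(void)
    {
            int64_t pa_span = CPU_PA_SPAN;
            int64_t cap     = 1LL << CONFIG_MAX_PHYS_BITS;

            if (cap < pa_span)
                    pa_span = cap;

            /* 256GiB - 64GiB = 192GiB of room left to slide the map. */
            int64_t range = LINEAR_REGION_SIZE - pa_span;
            printf("randomization range = %lld GiB\n",
                   (long long)(range / (1LL << 30)));
            return 0;
    }

Presumably the kernel-side change would just clamp the parange-derived
size before computing the range, though I haven't thought through what
should happen to hotplugged memory above the cap.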

On Mon, Jul 28, 2025 at 6:33 AM Will Deacon <will at kernel.org> wrote:
>
> Hi Seth, [+Catalin]
>
> On Fri, Jul 25, 2025 at 01:43:57PM -0400, Seth Jenkins wrote:
> >    On arm64, when the kernel goes to map all of physical memory into the
> >    linear mapping, it attempts to perform some randomization within a range
> >    calculated via the following logic in arm64_memblock_init():
> >            s64 range = linear_region_size -
> >                    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
> >    On arm64 systems that use 3-level paging (such as Android), the available
> >    virtual address space (linear_region_size) is smaller than the physical
> >    address space size supported by the CPU
> >    (BIT(id_aa64mmfr0_parange_to_phys_shift(parange))). This means
> >    the range is ultimately negative, no ASLR is performed at all, and the
> >    linear mapping is placed at the same static virtual address every boot.
>
> We've been aware of this limitation for a while but we've not been able
> to improve the situation for kernels where memory hotplug is enabled
> (and that includes the Android GKI kernel used by Pixel).
>
> Consequently, we gave up recently and removed the randomisation entirely
> in 1db780bafa4c ("arm64/mm: Remove randomization of the linear map") so
> it would be better just to discuss this on the list (linux-arm-kernel@)
> if you have any suggestions for how we could perform the randomisation
> effectively in this case.
>
> Cheers,
>
> Will


