[PATCH] arm64: mm: Make randomization work again in some cases

Ard Biesheuvel ardb at kernel.org
Wed Dec 15 01:26:55 PST 2021


On Fri, 10 Dec 2021 at 12:56, Catalin Marinas <catalin.marinas at arm.com> wrote:
>
> On Thu, Nov 04, 2021 at 02:27:47PM +0800, Kefeng Wang wrote:
> > After commit 97d6786e0669 ("arm64: mm: account for hotplug memory when
> > randomizing the linear region"), KASLR no longer works in some cases,
> > e.g. without memory hotplug and with va=39/pa=44. That is, when the
> > linear region size is smaller than the CPU's addressable PA range,
> > KASLR now fails where it worked before this commit. Let's calculate
> > the PA range from memblock start/end when CONFIG_MEMORY_HOTPLUG is
> > disabled.
> >
> > Meanwhile, let's add a warning message if the linear region size is too
> > small for randomization.
> >
> > Signed-off-by: Kefeng Wang <wangkefeng.wang at huawei.com>
> > ---
> > Hi Ard, one more question: the parange from the MMFR0 register may also
> > be too large, in which case the randomization still cannot work even with
> > this patch.
> >
> > If we know the maximum physical memory range (including hotplug memory),
> > could we add a way (maybe a cmdline option) to set the maximum parange,
> > so that randomization works in more cases? Any thoughts?
> >
> >  arch/arm64/mm/init.c | 30 +++++++++++++++++++++---------
> >  1 file changed, 21 insertions(+), 9 deletions(-)
> >
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index a8834434af99..27ec7f2c6fdb 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -284,21 +284,33 @@ void __init arm64_memblock_init(void)
> >
> >       if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
> >               extern u16 memstart_offset_seed;
> > -             u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
> > -             int parange = cpuid_feature_extract_unsigned_field(
> > -                                     mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
> > -             s64 range = linear_region_size -
> > -                         BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
> > +             s64 range;
> > +
> > +             if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) {
> > +                     u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
> > +                     int parange = cpuid_feature_extract_unsigned_field(
> > +                                             mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
> > +                     range = linear_region_size -
> > +                             BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
> > +
> > +             } else {
> > +                     range = linear_region_size -
> > +                             (memblock_end_of_DRAM() - memblock_start_of_DRAM());
> > +             }
>
> I'm not a big fan of making this choice depend on memory hotplug. Could
> we instead just limit the randomisation to the minimum of va bits and pa
> bits? We can keep the warning.
>

Not sure I follow. We currently have

linear_region_size = PAGE_END - _PAGE_OFFSET(vabits_actual);

range = linear_region_size - BIT(id_aa64mmfr0_parange_to_phys_shift(parange));

so the randomization range is defined by how much the VA range exceeds
the PA range.

Currently, no randomization is possible if VA range <= PA range, even
if the actual upper bound on the populated physical addresses is much
lower.
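
As a concrete illustration of the failing case, here is a small
stand-alone user-space sketch (not kernel code) with the va=39/pa=44
values from Kefeng's report hard-coded. The linear region covers half
of the kernel VA space (PAGE_END - PAGE_OFFSET), so the subtraction
goes negative and no slack is left to randomize:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        int vabits = 39;      /* VA_BITS, per the va=39/pa=44 report */
        int phys_shift = 44;  /* id_aa64mmfr0_parange_to_phys_shift(parange) */

        /* linear_region_size = PAGE_END - _PAGE_OFFSET(vabits_actual),
         * i.e. half of the kernel VA space */
        int64_t linear_region_size = INT64_C(1) << (vabits - 1);
        int64_t range = linear_region_size - (INT64_C(1) << phys_shift);

        printf("linear region: 2^%d bytes, PA span: 2^%d bytes\n",
               vabits - 1, phys_shift);
        printf("range = %lld => %s\n", (long long)range,
               range > 0 ? "some randomization possible"
                         : "no randomization possible");
        return 0;
}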

So the question is whether we can find another way to define this upper
bound if we cannot base it on memblock. Another option, suggested by
Kefeng, is to override PArange in ID_AA64MMFR0_EL1, similarly to how we
override other system registers.
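
For reference, a purely hypothetical sketch of what such an override
could look like, modeled on the shape of the existing entries in
arch/arm64/kernel/idreg-override.c; an id_aa64mmfr0 set, its "parange"
field, and the id_aa64mmfr0_override symbol are all assumptions here,
not anything in mainline:

/* Hypothetical: mirrors the style of the id_aa64mmfr1 entry in
 * idreg-override.c; nothing like this exists upstream. */
static const struct ftr_set_desc mmfr0 __initconst = {
        .name           = "id_aa64mmfr0",
        .override       = &id_aa64mmfr0_override,  /* assumed to exist */
        .fields         = {
                { "parange", ID_AA64MMFR0_PARANGE_SHIFT },
                {}
        },
};

With something like this registered, one could imagine passing
id_aa64mmfr0.parange=<n> on the command line to cap the assumed PA
range; again, this is only Kefeng's idea sketched out, not an existing
option.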

Another thing we might consider is to:
- get rid of the 1 GB granularity of the randomization, which is based
  on the assumption that granular (non-block) mappings in the linear
  region are bad; now that rodata=full is the default, that assumption
  no longer really applies either,
- use the span from memblock_start_of_DRAM() to the top of the PA range
  (PArange) as the physical range (sketched below), so that we have
  some wiggle room even when using 48 bits of PA.
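
A rough, untested sketch of the second point, assuming the existing
arm64_memblock_init() context (memstart_addr, memstart_offset_seed);
ARM64_MEMSTART_ALIGN is kept as a placeholder granularity here even
though the first point argues for a finer one, and this is not the
actual follow-up patch:

        s64 range = linear_region_size -
                    (BIT(id_aa64mmfr0_parange_to_phys_shift(parange)) -
                     memblock_start_of_DRAM());

        /* Subtracting the DRAM base from the PA span enlarges the
         * randomization window compared with using BIT(phys_shift)
         * alone. The seed application itself is unchanged from the
         * current code. */
        if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
                range /= ARM64_MEMSTART_ALIGN;
                memstart_addr -= ARM64_MEMSTART_ALIGN *
                                 ((range * memstart_offset_seed) >> 16);
        }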

I'll have a stab at coding this up today.


