[PATCH v2 1/1] arm64: mm: correct the inside linear map boundaries during hotplug check
Ard Biesheuvel
ardb at kernel.org
Tue Feb 16 02:36:29 EST 2021
On Tue, 16 Feb 2021 at 04:12, Anshuman Khandual
<anshuman.khandual at arm.com> wrote:
>
>
>
> On 2/16/21 1:21 AM, Pavel Tatashin wrote:
> > On Mon, Feb 15, 2021 at 2:34 PM Ard Biesheuvel <ardb at kernel.org> wrote:
> >>
> >> On Mon, 15 Feb 2021 at 20:30, Pavel Tatashin <pasha.tatashin at soleen.com> wrote:
> >>>
> >>>> Can't we simply use signed arithmetic here? This expression works fine
> >>>> if the quantities are all interpreted as s64 instead of u64.
> >>>
> >>> I was thinking about that, but I do not like the idea of using signed
> >>> arithmetic for physical addresses. Also, I am worried that someone in
> >>> the future will unknowingly change it back to unsigned or to phys_addr_t.
> >>> It is safer to have start explicitly set to 0 in case of a wrap.
> >>
> >> memstart_addr is already a s64 for this exact reason.
> >
> > memstart_addr is basically an offset, and it must be allowed to go
> > negative. For example, this would not work if it were not signed:
> > #define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
> >
> > However, on powerpc it is of type phys_addr_t.
> >
> >>
> >> Btw, the KASLR check is incorrect: memstart_addr could also be
> >> negative when running the 52-bit VA kernel on hardware that is only
> >> 48-bit VA capable.
> >
> > Good point!
> >
> > if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52))
> > memstart_addr -= _PAGE_OFFSET(48) - _PAGE_OFFSET(52);
> >
> > So, I will remove IS_ENABLED(CONFIG_RANDOMIZE_BASE) again.
> >
> > I am OK with changing start_linear_pa and end_linear_pa to signed, but
> > IMO what I have now is actually safer, as it makes sure that this does
> > not break again in the future.
> An explicit check for the flip-over, and providing two different start
> address points, would be required in order to use the new framework.
I don't think so. We no longer randomize over the same range, but take
the supported PA range into account (commit 97d6786e0669d).
This should ensure that __pa(_PAGE_OFFSET(vabits_actual)) never
assumes a negative value. And to Pavel's point re 48/52 bit VAs: the
fact that vabits_actual appears in this expression means that it
already takes this into account, so you are correct that we don't have
to care about that here.
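To spell that out (a sketch of the arithmetic only, and assuming I am
remembering the 5.11 definitions correctly: __pa() of a linear map
address is ((addr) - PAGE_OFFSET) + memstart_addr, PAGE_OFFSET for a
52-bit kernel is _PAGE_OFFSET(52), and _PAGE_OFFSET(va) is
-(UL(1) << (va))): on 48-bit capable hardware we get

  __pa(_PAGE_OFFSET(vabits_actual))
      == _PAGE_OFFSET(48) - _PAGE_OFFSET(52) + memstart_addr

and the adjustment quoted above (from arm64_memblock_init()) already
subtracted exactly _PAGE_OFFSET(48) - _PAGE_OFFSET(52) from
memstart_addr in that configuration, so the two terms cancel and the
result is the same non-negative value we would get with
vabits_actual == VA_BITS.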
So even if memstart_addr could be negative, this expression should
never produce a negative value. And with the patch above applied, it
should never do so when running under KASLR either.
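For the record, the signed variant I was suggesting would only be a
small change. A minimal sketch, assuming the v2 helper is still called
inside_linear_region() and keeps the same overall shape (so adjust to
whatever the patch actually looks like):

static bool inside_linear_region(u64 start, u64 size)
{
	/*
	 * Sketch only: names taken from the v2 patch as I read it.
	 * Interpreting the boundaries as s64 means that a boundary which
	 * would have wrapped to a huge u64 value is treated as negative
	 * instead, so the lower-bound check degrades gracefully rather
	 * than rejecting every hotplugged range.
	 */
	s64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
	s64 end_linear_pa = __pa(PAGE_END - 1);

	return (s64)start >= start_linear_pa &&
	       (s64)(start + size - 1) <= end_linear_pa;
}

But as argued above, with 97d6786e0669d in place the boundaries should
not go negative in the first place.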
So, a question for Pavel and Tyler: could you please check whether you
have that patch, and whether it fixes the issue? It was introduced in
v5.11 and hasn't been backported yet (it wasn't marked for -stable).