[PATCH v2 1/1] arm64: mm: correct the inside linear map boundaries during hotplug check
anshuman.khandual at arm.com
Mon Feb 15 22:12:26 EST 2021
On 2/16/21 1:21 AM, Pavel Tatashin wrote:
> On Mon, Feb 15, 2021 at 2:34 PM Ard Biesheuvel <ardb at kernel.org> wrote:
>> On Mon, 15 Feb 2021 at 20:30, Pavel Tatashin <pasha.tatashin at soleen.com> wrote:
>>>> Can't we simply use signed arithmetic here? This expression works fine
>>>> if the quantities are all interpreted as s64 instead of u64.
>>> I was thinking about that, but I do not like the idea of using signed
>>> arithmetic for physical addresses. Also, I am worried that someone in
>>> the future will unknowingly change it to unsigned types or to
>>> phys_addr_t. It is safer to have start explicitly set to 0 in case of
>>> wrap.
>> memstart_addr is already a s64 for this exact reason.
> memstart_addr is basically an offset and it can be negative. For
> example, this would not work if it were not signed:
> #define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
> However, on powerpc it is phys_addr_t type.
>> Btw, the KASLR check is incorrect: memstart_addr could also be
>> negative when running the 52-bit VA kernel on hardware that is only
>> 48-bit VA capable.
> Good point!
> if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52))
> memstart_addr -= _PAGE_OFFSET(48) - _PAGE_OFFSET(52);
> So, I will remove IS_ENABLED(CONFIG_RANDOMIZE_BASE) again.
> I am OK to change start_linear_pa, end_linear_pa to signed, but IMO
> what I have now is actually safer to make sure that does not break
> again in the future.
Using the new framework would require an explicit check for the flip-over
and providing two different start address points.