[PATCH v2 5/9] arm64: mm: move vmemmap region right below the linear region
Ard Biesheuvel
ard.biesheuvel at linaro.org
Tue Mar 1 07:43:46 PST 2016
On 1 March 2016 at 16:39, Catalin Marinas <catalin.marinas at arm.com> wrote:
> On Mon, Feb 29, 2016 at 03:44:40PM +0100, Ard Biesheuvel wrote:
>> @@ -404,6 +404,12 @@ void __init mem_init(void)
>> BUILD_BUG_ON(TASK_SIZE_32 > TASK_SIZE_64);
>> #endif
>>
>> + /*
>> + * Make sure we chose the upper bound of sizeof(struct page)
>> + * correctly.
>> + */
>> + BUILD_BUG_ON(sizeof(struct page) > (1 << STRUCT_PAGE_MAX_SHIFT));
>
> Since with the vmemmap fix you already assume that PAGE_OFFSET is half
> of the VA space, we should add another check on PAGE_OFFSET !=
> UL(0xffffffffffffffff) << (VA_BITS - 1), just in case someone thinks
> they could map a bit of extra RAM without going for a larger VA.
>
Indeed. The __pa() check only tests a single bit, so the kernel VA space
must be split exactly in half, unless we want to revisit that in the
future (e.g. if __pa() is no longer on a hot path after changes like
these).
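For illustration only, the extra assertion could look something like the
below (a sketch based on your wording above, next to the existing
BUILD_BUG_ON in mem_init(); not necessarily the form that ends up in the
patch):

    /*
     * Sketch: assert that PAGE_OFFSET sits exactly at the halfway point
     * of the kernel VA space, i.e. the linear region covers exactly the
     * lower half. This is what the single-bit test in __pa() relies on
     * to tell linear map addresses apart from kernel image addresses.
     */
    BUILD_BUG_ON(PAGE_OFFSET != (UL(0xffffffffffffffff) << (VA_BITS - 1)));

That way, any attempt to nudge PAGE_OFFSET down to map a bit of extra RAM
fails at build time instead of silently breaking __pa().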