[PATCH v5sub1 7/8] arm64: move kernel image to base of vmalloc area
Ard Biesheuvel
ard.biesheuvel at linaro.org
Fri Feb 12 10:01:49 PST 2016
On 12 February 2016 at 18:47, James Morse <james.morse at arm.com> wrote:
> Hi Ard,
>
> On 01/02/16 10:54, Ard Biesheuvel wrote:
>> This moves the module area to right before the vmalloc area, and
>> moves the kernel image to the base of the vmalloc area. This is
>> an intermediate step towards implementing KASLR, which allows the
>> kernel image to be located anywhere in the vmalloc area.
>
> I've rebased hibernate onto for-next/core, and this patch leads to the hibernate
> core code falling down a kernel-shaped hole in the linear map.
>
> The hibernate code assumes that for zones returned by for_each_populated_zone(),
> if pfn_valid() says a page is present, then it is okay to access the page via
> page_address(pfn_to_page(pfn)). But for pfns that correspond to the kernel text,
> this is still returning an address in the linear map, which isn't mapped...
>
> I'm not sure what the correct fix is here.
> Should this sort of walk be valid?
>
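(For reference, the walk being described boils down to something like the
sketch below. This is purely illustrative, not the actual hibernate code,
and the function name is made up.)

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/string.h>

/* Illustrative only: visit every valid pfn in each populated zone. */
static void walk_populated_pfns(void *buf)
{
        struct zone *zone;
        unsigned long pfn;

        for_each_populated_zone(zone) {
                for (pfn = zone->zone_start_pfn; pfn < zone_end_pfn(zone); pfn++) {
                        if (!pfn_valid(pfn))
                                continue;
                        /*
                         * page_address() returns a linear map address; pfns
                         * covering the kernel text are still pfn_valid() but
                         * are no longer mapped there, so this copy faults.
                         */
                        memcpy(buf, page_address(pfn_to_page(pfn)), PAGE_SIZE);
                }
        }
}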
I think the correct fix would be to mark the [_stext, _etext] interval
as NOMAP. That will also simplify the mapping routine, where I currently
check manually whether a memblock region intersects that interval, and
it should make this particular piece of code behave as expected.
However, you would still need to preserve the contents of that interval,
since the generic hibernate routines will no longer do that after this
change.
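Roughly, what I have in mind is something like the sketch below
(untested, and it assumes deriving the physical address of _stext via
the generic __pa_symbol() still works at that point):

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/mm.h>
#include <asm/sections.h>

/* Sketch: keep the kernel text interval out of the linear mapping. */
static void __init mark_kernel_text_nomap(void)
{
        memblock_mark_nomap(__pa_symbol(_stext), _etext - _stext);
}

This would presumably have to run early, e.g. from arm64_memblock_init(),
before the linear mapping is created.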
I will experiment with this on Monday, and report back.
Thanks,
Ard.
>
> From include/linux/mm.h:
>> static __always_inline void *lowmem_page_address(const struct page *page)
>> {
>>         return __va(PFN_PHYS(page_to_pfn(page)));
>> }
>
>
> Suggestions welcome!
>
>
> Thanks,
>
> James