[PATCH v2 1/2] arm64: don't make early_*map() calls post paging_init()

Leif Lindholm leif.lindholm at linaro.org
Wed Jan 7 05:31:09 PST 2015


On Wed, Jan 07, 2015 at 01:13:06PM +0000, Ard Biesheuvel wrote:
> >> > -void __init efi_idmap_init(void)
> >> > +void __init efi_memmap_init(void)
> >> >  {
> >> > +   u64 mapsize;
> >> > +
> >> >     if (!efi_enabled(EFI_BOOT))
> >> >             return;
> >> >
> >> > +   /* replace early memmap mapping with permanent mapping */
> >> > +   mapsize = memmap.map_end - memmap.map;
> >> > +   memmap.map = (__force void *)ioremap_cache((phys_addr_t)memmap.phys_map,
> >> > +                                              mapsize);
> >>
> >> ioremap_cache() could potentially fail here if the phys_map address
> >> doesn't have a valid pfn (not in the kernel linear RAM mapping) because
> >> some of the underlying VM support hasn't been initialized yet.
> >
> > Can you be more specific about the case you have in mind, please?
> > pfn_valid() uses the memblock data on arm64, and that should all have
> > been sorted out in paging_init(). What's the issue that you're
> > anticipating?
> 
> I think Mark's concern is that it is too early to call
> __get_free_page(), which is what happens if ioremap_cache() finds that
> the requested address is not covered by the existing linear mapping.
> Currently, UEFI reserved RAM regions are covered by the linear
> mapping, but that is something we intend to change in the future.
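
(For reference, the arm64 ioremap_cache() under discussion looks
roughly like the below -- paraphrased rather than copied verbatim from
arch/arm64/mm/ioremap.c, so treat it as a sketch:)

void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
{
	/* For normal RAM we already have a cacheable linear mapping,
	 * so no allocation is needed. */
	if (pfn_valid(__phys_to_pfn(phys_addr)))
		return (void __iomem *)__phys_to_virt(phys_addr);

	/* Otherwise a new mapping is built, which may need to allocate
	 * page tables -- the too-early-for-__get_free_page() case. */
	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
				__builtin_return_address(0));
}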

Which shouldn't be a problem, right? This function will be going away
with your "stable mappings" set, and the remap call will be bumped down
to an early initcall in arm64_enter_virtual_mode() (or whatever that
function ends up being called).
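
Concretely, I would expect the initcall version to end up looking
something like the below (a sketch only -- the function name comes from
this thread, and the error handling is my guess rather than the actual
patch):

static int __init arm64_enter_virtual_mode(void)
{
	u64 mapsize;

	if (!efi_enabled(EFI_BOOT))
		return -1;

	/*
	 * Replace the early memmap mapping with a permanent one. By
	 * initcall time the VM is fully up, so a NULL return here is a
	 * plain allocation failure -- but it should still be checked.
	 * (The early mapping is assumed to have been torn down already.)
	 */
	mapsize = memmap.map_end - memmap.map;
	memmap.map = (__force void *)ioremap_cache((phys_addr_t)memmap.phys_map,
						   mapsize);
	if (!memmap.map) {
		pr_err("Failed to remap EFI memory map\n");
		return -1;
	}
	memmap.map_end = memmap.map + mapsize;

	return 0;
}
early_initcall(arm64_enter_virtual_mode);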

/
    Leif


