bug in identity map for 4KB pages?

Mark Rutland mark.rutland at arm.com
Wed Jul 29 06:24:12 PDT 2015


On Wed, Jul 29, 2015 at 01:57:01PM +0100, Ard Biesheuvel wrote:
> On 29 July 2015 at 14:53, Mark Rutland <mark.rutland at arm.com> wrote:
> > On Wed, Jul 29, 2015 at 12:49:53PM +0100, Ard Biesheuvel wrote:
> >> On 29 July 2015 at 13:42, Mark Rutland <mark.rutland at arm.com> wrote:
> >> > On Wed, Jul 29, 2015 at 08:47:10AM +0100, Ard Biesheuvel wrote:
> >> >> On 29 July 2015 at 04:37, Stuart Yoder <stuart.yoder at freescale.com> wrote:
> >> >> > Our system has RAM at a high address, and previously required using 48-bit VA
> >> >> > in order to have an idmap that covered all of RAM.
> >> >> >
> >> >> > In testing on 4.2-rc4, which now contains support for the increased VA range
> >> >> > of the idmap without using 48-bit VA, I'm finding that things work for
> >> >> > 64KB pages, but do not for 4KB pages.
> >> >> >
> >> >> > Is there any known limitation here with 4KB pages?  Any ideas?
> >> >> >
> >> >>
> >> >> You probably have memory at 0x8000_0000 and at 0x80_8000_0000, right?
> >> >> So the physical arrangement still requires more than the 39 bits of
> >> >> virtual address space you get with 3 levels, even if the ID map can
> >> >> cope now. That is why you get the 0x40_0000_0000 virtual address:
> >> >> __phys_to_virt() just wraps to a positive number.
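> >> >>
> >> >> To spell out the wrap (assuming PHYS_OFFSET == 0x8000_0000 and the
> >> >> 39-bit VA values, i.e. PAGE_OFFSET == 0xffff_ffc0_0000_0000 and
> >> >> __phys_to_virt(x) == x - PHYS_OFFSET + PAGE_OFFSET):
> >> >>
> >> >>   __phys_to_virt(0x80_8000_0000)
> >> >>       == 0x80_8000_0000 - 0x8000_0000 + 0xffff_ffc0_0000_0000
> >> >>       == 0x80_0000_0000 + 0xffff_ffc0_0000_0000
> >> >>       == 0x40_0000_0000 (mod 2^64)
> >> >>
> >> >> since the 512 GB offset exceeds the 256 GB (2^38) linear region that
> >> >> three levels give you.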
> >> >
> >> > Ah, I see. So it's the linear mapping rather than the idmap which is the
> >> > problem.
> >> >
> >> > In early_init_dt_add_memory_arch we already cut memory that would fall
> >> > below the start of the linear map, by dropping anything below phys_offset.
> >> >
> >> > It looks like we already have the logic for cutting memory beyond the
> >> > end of the linear map, so we should just need to override the limit.
> >> >
> >> > Stuart, does the below patch prevent the panic you see?
> >> >
> >>
> >> Wouldn't something like this make more sense?
> >>
> >> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> >> index 597831bdddf3..64480b65ef17 100644
> >> --- a/arch/arm64/mm/init.c
> >> +++ b/arch/arm64/mm/init.c
> >> @@ -158,6 +158,15 @@ early_param("mem", early_mem);
> >>
> >>  void __init arm64_memblock_init(void)
> >>  {
> >> +       /*
> >> +        * Remove the memory that we will not be able to cover
> >> +        * with the linear mapping.
> >> +        */
> >> +       const s64 linear_region_size = -(s64)PAGE_OFFSET;
> >> +
> >> +       memblock_remove(0, memstart_addr);
> >> +       memblock_remove(memstart_addr + linear_region_size, ULLONG_MAX);
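> >>
> >> (Passing ULLONG_MAX as the size just means 'everything from that base
> >> upwards'; memblock caps the size internally so base + size cannot
> >> overflow.)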
> >
> > The maths is more correct, certainly.
> >
> > I'd prefer to handle the truncation in early_init_dt_add_memory_arch
> > because it already has the requisite logic, and keeping the truncation
> > in one place should keep things less confusing.
> >
> > To that end I'll respin my patch using PAGE_OFFSET to determine the
> > physical memory limit.
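> >
> > (With your linear_region_size expression that limit works out as
> > memstart_addr + linear_region_size, i.e. memstart_addr - PAGE_OFFSET
> > in unsigned arithmetic.)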
> >
> > Unless there's something I've missed?
> >
> 
> I don't think so. But perhaps it is just as easy to override
> early_init_dt_add_memory_arch() now, since I will need to do that
> anyway if the linear mapping changes ever land.
> 
> http://article.gmane.org/gmane.linux.kernel.efi/5744

That probably makes sense, yes.

We'll need to log when we actually truncate things, though. If we can do
that, then overriding early_init_dt_add_memory_arch (and having
arm64_memblock_init do the truncation) makes sense to me.
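
i.e. something like the below (completely untested sketch; the existing
trimming of memory below phys_offset would need folding in too):

void __init early_init_dt_add_memory_arch(u64 base, u64 size)
{
	/* Highest PA the linear mapping can cover. */
	const u64 limit = memstart_addr - PAGE_OFFSET;

	if (base >= limit) {
		pr_warn("Ignoring memory block 0x%llx - 0x%llx (outside linear mapping)\n",
			base, base + size);
		return;
	}

	if (base + size > limit) {
		pr_warn("Truncating memory at 0x%llx to fit in linear mapping\n",
			base);
		size = limit - base;
	}

	memblock_add(base, size);
}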

Mark.


