[REGRESSION?] ARM: 7677/1: LPAE: Fix mapping in alloc_init_section for unaligned addresses (was Re: Memory size unaligned to section boundary)

Mark Rutland mark.rutland at arm.com
Tue May 5 09:11:12 PDT 2015


> > I wasn't able to come up with a DTB that would trigger this. Do you have
> > an example set of memory nodes + memreserves? Where are your kernel and
> > DTB loaded in memory?
> 
> We have a single memory node/bank from 0x40000000 to the end of memory.
> We carve out a framebuffer at the end and do not report it to Linux, so
> the end becomes 0x40000000 + memory-size - fb-size. We use no
> memreserves, because with a memreserve the mmap of the fbmem by the
> simplefb driver fails: the reservation gets mapped non-cacheable,
> whereas if it is part of the main membank it is already mapped
> cacheable.

Sure. The only reason for caring about any memreserves was in case they
inadvertently affected the memblock_limit or anything else generated by
iterating over the memblock array. If you have none to begin with then
they clearly aren't involved.

> We subtract exactly the necessary fb-size. One known fb-size which
> triggers this is 1024x600, which means we end up subtracting
> 1024x600x4 bytes from the end of memory, so effectively we are
> doing the same as passing a mem= argument which is not 2MiB aligned.

Thanks for the info.
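
For concreteness, a quick userspace model of that arithmetic (the
0x40000000 base and the 1 GiB size below are just the values from your
description; everything else is illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t base    = 0x40000000;          /* start of the single membank */
        uint64_t mem     = 0x40000000;          /* e.g. 1 GiB of RAM */
        uint64_t fb_size = 1024 * 600 * 4;      /* 0x258000: not 2M-aligned */
        uint64_t end     = base + mem - fb_size;

        /* Prints: end = 0x7fda8000, 2M-aligned: no */
        printf("end = %#llx, 2M-aligned: %s\n",
               (unsigned long long)end,
               (end & ((1ULL << 21) - 1)) ? "no" : "yes");

        return 0;
}

So the effective bank end lands mid-way through a 2M pmd, just as a
misaligned mem= would.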

It turns out my bootloader was silently rewriting the memory nodes,
which was why I couldn't reproduce the issue with a DTB alone. With the
memory node reg munged to <0 0x80000000 0 0x3FDA8000> without bootloader
interference, TC2 dies similarly to what you described.

As far as I can see the issue is not a regression; it looks like we'd
previously fail to use a (1M) section unless we had precisely 1M or 2M
of the pmd left to map (those being the only cases where end would be
section-aligned).
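
To illustrate, here's a minimal userspace model of the old test from
alloc_init_section() (paraphrased from memory, so treat it as a sketch
rather than the exact kernel code; the SECTION_*/PMD_* values are the
classic 2-level ones):

#include <stdint.h>
#include <stdio.h>

#define SECTION_SHIFT   20                      /* classic 2-level: 1M sections */
#define SECTION_SIZE    (1ULL << SECTION_SHIFT)
#define SECTION_MASK    (~(SECTION_SIZE - 1))
#define PMD_SIZE        (2 * SECTION_SIZE)      /* pmds are handled in 2M pairs */

/* The old code only used a section when addr, end and phys all aligned. */
static int old_uses_section(uint64_t addr, uint64_t end, uint64_t phys)
{
        return ((addr | end | phys) & ~SECTION_MASK) == 0;
}

int main(void)
{
        uint64_t addr = 0x7fc00000;     /* start of a final, partial pmd */
        uint64_t end;

        /* Only end == addr + 1M and end == addr + 2M pass the test. */
        for (end = addr + SECTION_SIZE / 2; end <= addr + PMD_SIZE;
             end += SECTION_SIZE / 2)
                printf("end=%#llx -> %s\n", (unsigned long long)end,
                       old_uses_section(addr, end, addr) ? "section" : "pages");

        return 0;
}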

The below hack prevents the issue by rounding the memblock_limit down to
a full (2M) pmd boundary, so we don't try to allocate from the first
section of a partial pmd. That does mean that if your memory ends on a
1M (but not 2M) boundary, you lose that last 1M for early memblock
allocations.
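
Taking the bank end from the TC2 node above (0x80000000 + 0x3fda8000 =
0xbfda8000) as the limit for illustration, the two roundings compare as
follows (a userspace model of the kernel's round_down(), assuming a
power-of-two boundary):

#include <stdint.h>
#include <stdio.h>

/* Model of round_down() for a power-of-two boundary. */
#define round_down(x, y)        ((x) & ~((uint64_t)(y) - 1))

int main(void)
{
        uint64_t limit = 0xbfda8000;    /* 0x80000000 + 0x3fda8000 */

        /* SECTION_SIZE (1M) rounding can leave the limit mid-pmd... */
        printf("1M: %#llx\n", (unsigned long long)round_down(limit, 1 << 20));
        /* ...PMD_SIZE (2M) rounding always lands on a pmd boundary. */
        printf("2M: %#llx\n", (unsigned long long)round_down(limit, 1 << 21));

        return 0;
}

This prints 1M: 0xbfd00000 (mid-pmd) and 2M: 0xbfc00000 (a pmd
boundary), which is exactly where the diff below differs from the
current code.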

Balancing the pmd manipulation turned out to be a lot more painful than
I'd anticipated, so I gave up on trying to map the first section in a
partial pmd. If people are happy with the below diff I can respin as a
patch (with comment updates and so on).

Thanks,
Mark.

---->8----
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4e6ef89..2ea13f0 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1125,9 +1125,9 @@ void __init sanity_check_meminfo(void)
                         * occurs before any free memory is mapped.
                         */
                        if (!memblock_limit) {
-                               if (!IS_ALIGNED(block_start, SECTION_SIZE))
+                               if (!IS_ALIGNED(block_start, PMD_SIZE))
                                        memblock_limit = block_start;
-                               else if (!IS_ALIGNED(block_end, SECTION_SIZE))
+                               else if (!IS_ALIGNED(block_end, PMD_SIZE))
                                        memblock_limit = arm_lowmem_limit;
                        }
 
@@ -1142,7 +1142,7 @@ void __init sanity_check_meminfo(void)
         * last full section, which should be mapped.
         */
        if (memblock_limit)
-               memblock_limit = round_down(memblock_limit, SECTION_SIZE);
+               memblock_limit = round_down(memblock_limit, PMD_SIZE);
        if (!memblock_limit)
                memblock_limit = arm_lowmem_limit;
