[PATCH V5] ARM: LPAE: Fix mapping in alloc_init_section for unaligned addresses

R Sricharan r.sricharan at ti.com
Mon Feb 11 04:10:19 EST 2013


On Friday 08 February 2013 08:13 PM, Christoffer Dall wrote:
> On Fri, Feb 8, 2013 at 6:30 AM, R Sricharan <r.sricharan at ti.com> wrote:
>> With LPAE enabled, alloc_init_section() does not map the entire
>> address space for unaligned addresses.
>>
>> The issue also reproduced with CMA + LPAE. CMA tries to map 16MB
>> with page granularity mappings during boot. alloc_init_pte()
>> is called and out of 16MB, only 2MB gets mapped and the rest remains
>> inaccessible.
>>
>> Because of this OMAP5 boot is broken with CMA + LPAE enabled.
>> Fix the issue by ensuring that the entire address range is
>> mapped.
>>
>> Signed-off-by: R Sricharan <r.sricharan at ti.com>
>> Cc: Catalin Marinas <catalin.marinas at arm.com>
>> Cc: Christoffer Dall <chris at cloudcar.com>
>> Cc: Russell King <linux at arm.linux.org.uk>
>> Acked-by: Santosh Shilimkar <santosh.shilimkar at ti.com>
>> Tested-by: Christoffer Dall <chris at cloudcar.com>
>
> I don't think that I've tested this version of the patch, but ok.
>
>> ---
>>   [V2] Moved the loop to alloc_init_pte as per Russell's
>>       feedback and changed the subject accordingly.
>>       Using PMD_XXX instead of SECTION_XXX to avoid
>>       different loop increments with/without LPAE.
>>
>>   [v3] Removed the dummy variable phys and updated
>>        the commit log for CMA case.
>>
>>   [v4] Resending with updated change log and
>>        updating the tags.
>>
>>   [v5] Renamed alloc_init_section to alloc_init_pmd
>>        and moved the loop back there. Also introduced
>>        map_init_section as per Catalin's comments.
>>
>>   arch/arm/mm/mmu.c |   66 ++++++++++++++++++++++++++++++++---------------------
>>   1 file changed, 40 insertions(+), 26 deletions(-)
>>
>> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
>> index ce328c7..a89df2b 100644
>> --- a/arch/arm/mm/mmu.c
>> +++ b/arch/arm/mm/mmu.c
>> @@ -576,39 +576,53 @@ static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
>>          } while (pte++, addr += PAGE_SIZE, addr != end);
>>   }
>>
>> -static void __init alloc_init_section(pud_t *pud, unsigned long addr,
>> +static void __init map_init_section(pmd_t *pmd, unsigned long addr,
>> +                       unsigned long end, phys_addr_t phys,
>> +                       const struct mem_type *type)
>> +{
>> +#ifndef CONFIG_ARM_LPAE
>> +       if (addr & SECTION_SIZE)
>> +               pmd++;
>> +#endif
>
> I still think this one deserves a comment.
  Ok, added it in the updated version. In fact I was getting a
  little verbose describing this, but I'm not sure there is a
  much simpler way.
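
  Something along these lines (only a rough sketch here; the exact
  wording in the updated patch may differ):

	#ifndef CONFIG_ARM_LPAE
		/*
		 * In the classic (2-level) format, a Linux PMD covers 2MB
		 * and is made up of two 1MB hardware L1 section entries.
		 * pmd_offset() returns the first of the pair, so for an
		 * address in the odd 1MB half we step to the second
		 * hardware entry before writing the section mapping.
		 */
		if (addr & SECTION_SIZE)
			pmd++;
	#endif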
>
>> +       do {
>> +               *pmd = __pmd(phys | type->prot_sect);
>> +               phys += SECTION_SIZE;
>> +       } while (pmd++, addr += SECTION_SIZE, addr != end);
>> +
>> +       flush_pmd_entry(pmd);
>> +}
>> +
>> +static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
>>                                        unsigned long end, phys_addr_t phys,
>>                                        const struct mem_type *type)
>>   {
>>          pmd_t *pmd = pmd_offset(pud, addr);
>> +       unsigned long next;
>>
>> -       /*
>> -        * Try a section mapping - end, addr and phys must all be aligned
>> -        * to a section boundary.  Note that PMDs refer to the individual
>> -        * L1 entries, whereas PGDs refer to a group of L1 entries making
>> -        * up one logical pointer to an L2 table.
>> -        */
>> -       if (type->prot_sect && ((addr | end | phys) & ~SECTION_MASK) == 0) {
>> -               pmd_t *p = pmd;
>> -
>> -#ifndef CONFIG_ARM_LPAE
>> -               if (addr & SECTION_SIZE)
>> -                       pmd++;
>> -#endif
>> -
>> -               do {
>> -                       *pmd = __pmd(phys | type->prot_sect);
>> -                       phys += SECTION_SIZE;
>> -               } while (pmd++, addr += SECTION_SIZE, addr != end);
>> +       do {
>> +               /*
>> +                * With LPAE, we may be required to loop in, to map
>
> nit: "loop in" should just be loop. Also, I would not say that we may
> be required to loop in the LPAE case, but that for LPAE we must loop
> over all the PMDs for the given range.
  Ok, correct.
>
>> +                * all the pmds for the given range.
>> +                */
>> +               next = pmd_addr_end(addr, end);
>>
>> -               flush_pmd_entry(p);
>> -       } else {
>>                  /*
>> -                * No need to loop; pte's aren't interested in the
>> -                * individual L1 entries.
>> +                * Try a section mapping - end, addr and phys must all be
>> +                * aligned to a section boundary.  Note that PMDs refer to
>> +                * the individual L1 entries, whereas PGDs refer to a group
>> +                * of L1 entries making up one logical pointer to an L2 table.
>
> This is not true for LPAE, so now when we're changing the comment, we
> should be more specific.
    Yes, probably the second line of the comment is not common to both
    formats. I am using it to describe this block now (the comment that
    stays in alloc_init_pmd() is sketched after the block below):

	#ifndef CONFIG_ARM_LPAE
		if (addr & SECTION_SIZE)
			pmd++;
	#endif
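
  The comment left in alloc_init_pmd() would then keep only the part
  that holds for both formats, roughly (again just a sketch, the final
  wording may differ):

		/*
		 * Try a section mapping - addr, next and phys must all be
		 * aligned to a section boundary.
		 */
		if (type->prot_sect &&
				((addr | next | phys) & ~SECTION_MASK) == 0)
			map_init_section(pmd, addr, next, phys, type);
		else
			alloc_init_pte(pmd, addr, next,
					__phys_to_pfn(phys), type);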

Regards,
  Sricharan


