[PATCH v2 1/1] ARM: LPAE: Fix mapping in alloc_init_pte for unaligned addresses.
Laura Abbott
lauraa at codeaurora.org
Thu Mar 14 16:19:17 EDT 2013
On 3/13/2013 10:14 PM, Catalin Marinas wrote:
> (sorry if you got this message twice, gmail's new reply method
> decided to send HTML)
>
> On 18 September 2012 12:52, R, Sricharan <r.sricharan at ti.com> wrote:
>> Hi,
>> On Fri, Aug 17, 2012 at 4:32 PM, R, Sricharan <r.sricharan at ti.com> wrote:
>>> Hi,
>>>
>>>> With LPAE, when the start address, the end address, or the
>>>> physical address to be mapped is unaligned, alloc_init_section()
>>>> creates page-granularity mappings. alloc_init_section() calls
>>>> alloc_init_pte(), which populates one pmd entry and sets up its
>>>> ptes. But if the size is greater than what one pmd entry can map,
>>>> the rest of the range remains unmapped.
>>>>
>>>> The issue becomes visible when LPAE is enabled, since we then have
>>>> three levels with separate pgds and pmds. When a static mapping of
>>>> 3MB is requested, only 2MB is mapped and the remaining 1MB is left
>>>> unmapped. Fix this by looping so that the entire unaligned address
>>>> range gets mapped.
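To make the arithmetic concrete: under LPAE each pmd entry covers 2MB,
so a 3MB request spans two pmd entries, and code that only fills one
entry's pte table stops 1MB short. A minimal standalone C sketch of the
boundary calculation (the 2MB pmd span is the only kernel fact assumed;
the program itself is purely illustrative):

    #include <stdio.h>

    #define PMD_SIZE (1UL << 21)  /* 2MB covered by one pmd entry (LPAE) */
    #define PMD_MASK (~(PMD_SIZE - 1))

    int main(void)
    {
        unsigned long addr = 0xc0000000;      /* start of a 3MB request */
        unsigned long end = addr + (3UL << 20);
        /* pre-patch, ptes were only filled up to the first pmd boundary */
        unsigned long mapped_end = (addr + PMD_SIZE) & PMD_MASK;

        printf("mapped:   %#lx-%#lx (%luMB)\n", addr, mapped_end,
               (mapped_end - addr) >> 20);
        printf("unmapped: %#lx-%#lx (%luMB)\n", mapped_end, end,
               (end - mapped_end) >> 20);
        return 0;
    }

This prints 2MB mapped and 1MB unmapped, matching the symptom above.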
>>>>
>>>> Boot tested on an OMAP5 EVM with LPAE both enabled and disabled,
>>>> and verified that static mappings with unaligned addresses are
>>>> properly mapped.
>>>>
>>>> Signed-off-by: R Sricharan <r.sricharan at ti.com>
>>>> Reviewed-by: Santosh Shilimkar <santosh.shilimkar at ti.com>
>>>> Cc: Catalin Marinas <catalin.marinas at arm.com>
>>>> ---
>>>> [V2] Moved the loop to alloc_init_pte() as per Russell's
>>>> feedback and changed the subject accordingly.
>>>> Using PMD_XXX instead of SECTION_XXX to avoid different
>>>> loop increments with and without LPAE.
>>>>
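For context on the changelog note above: the section size differs
between the two page-table formats while the pmd span does not, which
is why stepping by PMD_SIZE gives a uniform increment. From memory of
arch/arm/include/asm/pgtable-2level.h and pgtable-3level.h (treat the
exact values as an assumption and double-check against the tree):

    /* classic 2-level ARM (assumed values) */
    #define SECTION_SHIFT 20    /* 1MB sections */
    #define PMD_SHIFT     21    /* one pmd entry spans 2MB */

    /* LPAE 3-level (assumed values) */
    #define SECTION_SHIFT 21    /* a "section" is a 2MB block mapping */
    #define PMD_SHIFT     21

So a SECTION_SIZE step would advance 1MB without LPAE and 2MB with it,
whereas a PMD_SIZE step is 2MB in both configurations.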
>>>> arch/arm/mm/mmu.c | 19 +++++++++++++++----
>>>> 1 file changed, 15 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
>>>> index cf4528d..0ed8808 100644
>>>> --- a/arch/arm/mm/mmu.c
>>>> +++ b/arch/arm/mm/mmu.c
>>>> @@ -585,11 +585,22 @@ static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
>>>>  				  unsigned long end, unsigned long pfn,
>>>>  				  const struct mem_type *type)
>>>>  {
>>>> -	pte_t *pte = early_pte_alloc(pmd, addr, type->prot_l1);
>>>> +	unsigned long next;
>>>> +	pte_t *pte;
>>>> +
>>>>  	do {
>>>> -		set_pte_ext(pte, pfn_pte(pfn, __pgprot(type->prot_pte)), 0);
>>>> -		pfn++;
>>>> -	} while (pte++, addr += PAGE_SIZE, addr != end);
>>>> +		if ((end - addr) & PMD_MASK)
>>>> +			next = (addr + PMD_SIZE) & PMD_MASK;
>>>> +		else
>>>> +			next = end;
>>>> +
>>>> +		pte = early_pte_alloc(pmd, addr, type->prot_l1);
>>>> +		do {
>>>> +			set_pte_ext(pte, pfn_pte(pfn,
>>>> +					__pgprot(type->prot_pte)), 0);
>>>> +			pfn++;
>>>> +		} while (pte++, addr += PAGE_SIZE, addr != next);
>>>> +	} while (pmd++, addr = next, addr != end);
>>>>  }
>>>>
>>> ping..
>>
>> Ping again.
>> The issue is reproducible in mainline with CMA + LPAE enabled:
>> CMA tries to reserve and map 16MB with 2-level table entries and
>> crashes in alloc_init_pte().
>>
>> This patch fixes that. Just posted a V3 of the same patch.
>>
>> https://patchwork.kernel.org/patch/1472031/
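A plausible mechanism for that crash: the pre-patch loop
"while (pte++, addr += PAGE_SIZE, addr != end)" advances the pte cursor
all the way to end, but only one pte table was allocated, so a 16MB
range walks the cursor thousands of entries past the table. A small C
sketch of the overrun (512 ptes per table and 4KB pages are assumed
values):

    #include <stdio.h>

    #define PAGE_SIZE    (1UL << 12)
    #define PTRS_PER_PTE 512          /* entries per pte table, assumed */

    int main(void)
    {
        unsigned long addr = 0, end = 16UL << 20;   /* 16MB region */
        unsigned long idx = 0;

        /* mimic: while (pte++, addr += PAGE_SIZE, addr != end) */
        for (; addr != end; addr += PAGE_SIZE, idx++) {
            if (idx == PTRS_PER_PTE) {
                printf("pte index %lu runs off the %d-entry table\n",
                       idx, PTRS_PER_PTE);
                break;
            }
        }
        return 0;
    }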
>
> I thought there was another patch where the looping was in an
> alloc_init_pmd() function, or perhaps there are just two different
> threads. I acked that one but not this one, as I don't think looping
> over pmds inside the alloc_init_pte() function is the right thing.
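For reference, a sketch of the shape being suggested here, with the
walk done one level up (this roughly matches what eventually landed
upstream, but it is reconstructed from memory, so treat the helper
name __map_init_section() and the exact alignment check as
assumptions):

    static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
                                      unsigned long end, phys_addr_t phys,
                                      const struct mem_type *type)
    {
        pmd_t *pmd = pmd_offset(pud, addr);
        unsigned long next;

        do {
            /* clamp each iteration to a pmd boundary */
            next = pmd_addr_end(addr, end);

            /*
             * Use a section mapping when start, end and phys are all
             * section-aligned; fall back to page mappings otherwise.
             */
            if (type->prot_sect &&
                ((addr | next | phys) & ~SECTION_MASK) == 0)
                __map_init_section(pmd, addr, next, phys, type);
            else
                alloc_init_pte(pmd, addr, next,
                               __phys_to_pfn(phys), type);

            phys += next - addr;
        } while (pmd++, addr = next, addr != end);
    }

With the loop at this level, alloc_init_pte() never sees a range larger
than a single pmd entry can cover.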
>
I submitted a patch last week for what I think is the same issue ("arm:
mm: Populate initial page tables across sections"), but I don't think I
ever saw any feedback on it. Do we have three patches floating around
that fix the same issue?
Laura
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation