Memory size unaligned to section boundary

Hans de Goede hdegoede at redhat.com
Sat May 9 06:54:00 PDT 2015


Hi,

On 09-05-15 15:48, Russell King - ARM Linux wrote:
> On Sat, May 09, 2015 at 03:38:16PM +0200, Hans de Goede wrote:
>> Ok, so does that mean that Mark's original patch:
>>
>> ---->8----
>> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
>> index 4e6ef89..2ea13f0 100644
>> --- a/arch/arm/mm/mmu.c
>> +++ b/arch/arm/mm/mmu.c
>> @@ -1125,9 +1125,9 @@ void __init sanity_check_meminfo(void)
>>                           * occurs before any free memory is mapped.
>>                           */
>>                          if (!memblock_limit) {
>> -                               if (!IS_ALIGNED(block_start, SECTION_SIZE))
>> +                               if (!IS_ALIGNED(block_start, PMD_SIZE))
>>                                          memblock_limit = block_start;
>> -                               else if (!IS_ALIGNED(block_end, SECTION_SIZE))
>> +                               else if (!IS_ALIGNED(block_end, PMD_SIZE))
>>                                          memblock_limit = arm_lowmem_limit;
>>                          }
>>
>> @@ -1142,7 +1142,7 @@ void __init sanity_check_meminfo(void)
>>           * last full section, which should be mapped.
>>           */
>>          if (memblock_limit)
>> -               memblock_limit = round_down(memblock_limit, SECTION_SIZE);
>> +               memblock_limit = round_down(memblock_limit, PMD_SIZE);
>>          if (!memblock_limit)
>>                  memblock_limit = arm_lowmem_limit;
>>
>>
>> Is good, or do we only need to have the last chunk of this patch ?
>
> That should do it, thanks.

"that should do it" means the entire patch or only the last chunk?

Regards,

Hans
