[PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
Florian Fainelli
f.fainelli at gmail.com
Thu Oct 29 22:25:46 EDT 2020
On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb at kernel.org> wrote:
>>
>> free_highpages() iterates over the free memblock regions in high
>> memory, and marks each page as available for the memory management
>> system. However, as it rounds the end of each region downwards, we
>> may end up freeing a page that is memblock_reserve()d, resulting
>> in memory corruption. So align the end of the range to the next
>> page instead.
>>
>> Cc: <stable at vger.kernel.org>
>> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
>> ---
>> arch/arm/mm/init.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
>> index a391804c7ce3..d41781cb5496 100644
>> --- a/arch/arm/mm/init.c
>> +++ b/arch/arm/mm/init.c
>> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
>>  	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
>>  				&range_start, &range_end, NULL) {
>>  		unsigned long start = PHYS_PFN(range_start);
>> -		unsigned long end = PHYS_PFN(range_end);
>> +		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
>>
>
> Apologies, this should be
>
> -		unsigned long start = PHYS_PFN(range_start);
> +		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
>  		unsigned long end = PHYS_PFN(range_end);
>
>
> Strangely enough, the wrong version above also fixed the issue I was
> seeing, but it is the start that needs rounding up, not the end.
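
To make the rounding concrete, here is a minimal user-space sketch (the
macro definitions below mimic the kernel's PHYS_PFN()/PAGE_ALIGN(), and
the address value is invented for illustration): PHYS_PFN() truncates,
so an unaligned range_start lands on the PFN of a page that still
overlaps the preceding reservation, while PHYS_PFN(PAGE_ALIGN(range_start))
starts at the first fully free page.

#include <stdio.h>

/* Stand-ins mimicking the kernel macros (4K pages assumed). */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	/* Hypothetical free range starting mid-page, e.g. just past a
	 * memblock_reserve()d object ending at 0x60000a00. */
	unsigned long range_start = 0x60000a00;

	/* Truncation rounds *down*, into the partially reserved page. */
	printf("PHYS_PFN(start)             = 0x%lx\n",
	       PHYS_PFN(range_start));			/* 0x60000 */

	/* Rounding up first skips the partial page, as in the follow-up. */
	printf("PHYS_PFN(PAGE_ALIGN(start)) = 0x%lx\n",
	       PHYS_PFN(PAGE_ALIGN(range_start)));	/* 0x60001 */

	return 0;
}

Freeing pages starting from PFN 0x60000 would hand the page allocator a
page whose first 0xa00 bytes still belong to the reservation; starting
from PFN 0x60001 avoids it.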
Is there a particular commit that you identified which could be used as a
Fixes: tag to ease the backporting of such a change?
--
Florian