[PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations

Ard Biesheuvel ardb at kernel.org
Fri Oct 30 11:22:37 EDT 2020


On Fri, 30 Oct 2020 at 16:18, Mike Rapoport <rppt at linux.ibm.com> wrote:
>
> Hi Ard,
>
> On Fri, Oct 30, 2020 at 10:29:16AM +0100, Ard Biesheuvel wrote:
> > (+ Mike)
> >
> > On Fri, 30 Oct 2020 at 03:25, Florian Fainelli <f.fainelli at gmail.com> wrote:
> > >
> > >
> > >
> > > On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> > > > On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb at kernel.org> wrote:
> > > >>
> > > >> free_highpages() iterates over the free memblock regions in high
> > > >> memory, and marks each page as available for the memory management
> > > >> system. However, as it rounds the end of each region downwards, we
> > > >> may end up freeing a page that is memblock_reserve()d, resulting
> > > >> in memory corruption. So align the end of the range to the next
> > > >> page instead.
> > > >>
> > > >> Cc: <stable at vger.kernel.org>
> > > >> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> > > >> ---
> > > >>  arch/arm/mm/init.c | 2 +-
> > > >>  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >>
> > > >> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > > >> index a391804c7ce3..d41781cb5496 100644
> > > >> --- a/arch/arm/mm/init.c
> > > >> +++ b/arch/arm/mm/init.c
> > > >> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
> > > >>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > > >>                                 &range_start, &range_end, NULL) {
> > > >>                 unsigned long start = PHYS_PFN(range_start);
> > > >> -               unsigned long end = PHYS_PFN(range_end);
> > > >> +               unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
> > > >>
> > > >
> > > > Apologies, this should be
> > > >
> > > > -               unsigned long start = PHYS_PFN(range_start);
> > > > +               unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
> > > >                 unsigned long end = PHYS_PFN(range_end);
> > > >
> > > >
> > > > Strangely enough, the wrong version above also fixed the issue I was
> > > > seeing, but it is the start that needs rounding up, not the end.
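> > > >
> > > > (For reference, PHYS_PFN() always rounds down; the relevant helpers
> > > > in include/linux/pfn.h are, roughly:
> > > >
> > > >     #define PFN_UP(x)       (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT) /* round up */
> > > >     #define PFN_DOWN(x)     ((x) >> PAGE_SHIFT)                   /* round down */
> > > >     #define PHYS_PFN(x)     ((unsigned long)((x) >> PAGE_SHIFT))  /* round down */
> > > >
> > > > so rounding an unaligned start up, rather than the end, is what keeps
> > > > a partially reserved page from being freed.)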
> > >
> > > Is there a particular commit that you identified which could be used as
> > > a Fixes: tag to ease the backporting of such a change?
> >
> > Ah, hold on. This appears to be a very recent regression, introduced by
> > commit cddb5ddf2b76debdb8cad1728ad0a9321383d933 in v5.10-rc1.
> >
> > The old code was
> >
> > unsigned long start = memblock_region_memory_base_pfn(mem);
> >
> > which uses PFN_UP() to round up, whereas the new code rounds down.
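> >
> > For reference, that helper is (roughly) just:
> >
> >     static inline unsigned long
> >     memblock_region_memory_base_pfn(const struct memblock_region *reg)
> >     {
> >             return PFN_UP(reg->base);
> >     }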
> >
> > Looks like this is broken on a lot of platforms.
> >
> > Mike?
>
> I've reviewed the whole series again, and it seems that only the highmem
> initialization on arm and xtensa (which copied this code from arm) has
> this problem. I might have missed something again, though.
>
> So, to restore the original behaviour I think the fix should be
>
>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
>                                 &range_start, &range_end, NULL) {
>                 unsigned long start = PHYS_UP(range_start);
>                 unsigned long end = PHYS_DOWN(range_end);
>
>

PHYS_UP and PHYS_DOWN don't exist.
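
Presumably you meant PFN_UP() and PFN_DOWN() from include/linux/pfn.h,
which do exist. A sketch of what I think you intend, restoring the
rounding the old memblock_region_memory_*_pfn() helpers did:

	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		/* round an unaligned start up and an unaligned end down,
		 * so a partially reserved page is never handed back */
		unsigned long start = PFN_UP(range_start);
		unsigned long end = PFN_DOWN(range_end);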

Could you please send a patch that fixes this everywhere it's broken?


