[PATCH] arm64: mm: decrease the section size to reduce the memory reserved for the page map

Mike Rapoport rppt at linux.ibm.com
Mon Dec 7 05:04:26 EST 2020


On Mon, Dec 07, 2020 at 10:49:26AM +0100, Ard Biesheuvel wrote:
> On Mon, 7 Dec 2020 at 10:42, Mike Rapoport <rppt at linux.ibm.com> wrote:
> >
> > On Mon, Dec 07, 2020 at 09:35:06AM +0000, Marc Zyngier wrote:
> > > On 2020-12-07 09:09, Ard Biesheuvel wrote:
> > > > (+ Marc)
> > > >
> > > > On Fri, 4 Dec 2020 at 12:14, Will Deacon <will at kernel.org> wrote:
> > > > >
> > > > > On Fri, Dec 04, 2020 at 09:44:43AM +0800, Wei Li wrote:
> > > > > > With the sparse memory model and SPARSEMEM_VMEMMAP, the memory reserved
> > > > > > for the page map covering a memory hole is not freed; decreasing the
> > > > > > section size reduces the amount of reserved memory that is wasted.
> > > > > >
> > > > > > Signed-off-by: Wei Li <liwei213 at huawei.com>
> > > > > > Signed-off-by: Baopeng Feng <fengbaopeng2 at hisilicon.com>
> > > > > > Signed-off-by: Xia Qing <saberlily.xia at hisilicon.com>
> > > > > > ---
> > > > > >  arch/arm64/include/asm/sparsemem.h | 2 +-
> > > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > >
> > > > > > diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
> > > > > > index 1f43fcc79738..8963bd3def28 100644
> > > > > > --- a/arch/arm64/include/asm/sparsemem.h
> > > > > > +++ b/arch/arm64/include/asm/sparsemem.h
> > > > > > @@ -7,7 +7,7 @@
> > > > > >
> > > > > >  #ifdef CONFIG_SPARSEMEM
> > > > > >  #define MAX_PHYSMEM_BITS     CONFIG_ARM64_PA_BITS
> > > > > > -#define SECTION_SIZE_BITS    30
> > > > > > +#define SECTION_SIZE_BITS    27
> > > > >
> > > > > We chose '30' to avoid running out of bits in the page flags. What
> > > > > changed?
> > > > >
> > > > > With this patch, I can trigger:
> > > > >
> > > > > ./include/linux/mmzone.h:1170:2: error: Allocator MAX_ORDER exceeds SECTION_SIZE
> > > > > #error Allocator MAX_ORDER exceeds SECTION_SIZE
> > > > >
> > > > > if I bump up NR_CPUS and NODES_SHIFT.
> > > > >
> > > >
> > > > Does this mean we will run into problems with the GICv3 ITS LPI tables
> > > > again if we are forced to reduce MAX_ORDER to fit inside
> > > > SECTION_SIZE_BITS?
> > >
> > > Most probably. We are already massively constrained on platforms
> > > such as TX1, and dividing the max allocatable range by 8 isn't
> > > going to make it work any better...
> >
> > I don't think MAX_ORDER should shrink. Even if SECTION_SIZE_BITS is
> > reduced, it should accommodate the existing MAX_ORDER.
> >
> > My two pennies.
> >
> 
> But include/linux/mmzone.h:1170 has this:
> 
> #if (MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS
> #error Allocator MAX_ORDER exceeds SECTION_SIZE
> #endif
> 
> and Will managed to trigger it after applying this patch.

Right, because with 64K pages a section size of 27 bits is not enough to
accommodate MAX_ORDER (2^13 pages of 64K, i.e. 2^29 bytes).
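
Spelling the check out for the 64K-page case (assuming the arm64 default of
FORCE_MAX_ZONEORDER = 14, and hence MAX_ORDER = 14, with THP enabled):

    MAX_ORDER - 1 + PAGE_SHIFT = 13 + 16 = 29 > 27 = SECTION_SIZE_BITS

so the largest buddy allocation (2^29 bytes) no longer fits in one section.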

Which means that the definition of SECTION_SIZE_BITS should take MAX_ORDER
into account, either statically with

#if defined(CONFIG_ARM64_4K_PAGES)
#define SECTION_SIZE_BITS <a number>
#elif defined(CONFIG_ARM64_16K_PAGES)
#define SECTION_SIZE_BITS <a larger number>
#elif defined(CONFIG_ARM64_64K_PAGES)
#define SECTION_SIZE_BITS <even larger number>
#else
#error "and what is the page size?"
#endif
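
(For concreteness, one hypothetical way of filling in those numbers, assuming
the default FORCE_MAX_ZONEORDER values of 11/12/14 for 4K/16K/64K pages with
THP, which make the minimum workable sizes 22, 25 and 29 bits respectively:

#if defined(CONFIG_ARM64_64K_PAGES)
#define SECTION_SIZE_BITS 29
#else
#define SECTION_SIZE_BITS 27
#endif

keeping the 27 proposed here wherever it already satisfies the MAX_ORDER
check.)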

or dynamically, like e.g. ia64 does:

#ifdef CONFIG_FORCE_MAX_ZONEORDER
#if ((CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS)
#undef SECTION_SIZE_BITS
#define SECTION_SIZE_BITS (CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT)
#endif
#endif
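
A rough sketch of what that dynamic variant could look like on arm64
(hypothetical and untested; it assumes PAGE_SHIFT is visible in sparsemem.h,
which ia64 arranges by including <asm/page.h> there):

/* arch/arm64/include/asm/sparsemem.h, sketch only */
#define SECTION_SIZE_BITS 27

/* never make a section smaller than the largest buddy allocation */
#if ((CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS)
#undef SECTION_SIZE_BITS
#define SECTION_SIZE_BITS (CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT)
#endif

Since arm64's Kconfig gives FORCE_MAX_ZONEORDER a default in every
configuration, the outer #ifdef that ia64 carries should not be needed.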


-- 
Sincerely yours,
Mike.


