[PATCH] arm64: mm: decrease the section size to reduce the memory reserved for the page map
Song Bao Hua (Barry Song)
song.bao.hua at hisilicon.com
Sun Dec 6 20:40:35 EST 2020
> -----Original Message-----
> From: Mike Rapoport [mailto:rppt at linux.ibm.com]
> Sent: Saturday, December 5, 2020 12:44 AM
> To: Will Deacon <will at kernel.org>
> Cc: liwei (CM) <liwei213 at huawei.com>; catalin.marinas at arm.com; fengbaopeng
> <fengbaopeng2 at hisilicon.com>; nsaenzjulienne at suse.de; steve.capper at arm.com;
> Song Bao Hua (Barry Song) <song.bao.hua at hisilicon.com>;
> linux-arm-kernel at lists.infradead.org; linux-kernel at vger.kernel.org; butao
> <butao at hisilicon.com>
> Subject: Re: [PATCH] arm64: mm: decrease the section size to reduce the memory
> reserved for the page map
>
> On Fri, Dec 04, 2020 at 11:13:47AM +0000, Will Deacon wrote:
> > On Fri, Dec 04, 2020 at 09:44:43AM +0800, Wei Li wrote:
> > > With SPARSEMEM_VMEMMAP, the reserved page map (vmemmap) covering a
> > > memory hole is not freed, so decreasing the section size reduces the
> > > amount of reserved memory wasted on such holes.
> > >
> > > Signed-off-by: Wei Li <liwei213 at huawei.com>
> > > Signed-off-by: Baopeng Feng <fengbaopeng2 at hisilicon.com>
> > > Signed-off-by: Xia Qing <saberlily.xia at hisilicon.com>
> > > ---
> > > arch/arm64/include/asm/sparsemem.h | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/arch/arm64/include/asm/sparsemem.h
> b/arch/arm64/include/asm/sparsemem.h
> > > index 1f43fcc79738..8963bd3def28 100644
> > > --- a/arch/arm64/include/asm/sparsemem.h
> > > +++ b/arch/arm64/include/asm/sparsemem.h
> > > @@ -7,7 +7,7 @@
> > >
> > > #ifdef CONFIG_SPARSEMEM
> > > #define MAX_PHYSMEM_BITS CONFIG_ARM64_PA_BITS
> > > -#define SECTION_SIZE_BITS 30
> > > +#define SECTION_SIZE_BITS 27
> >
> > We chose '30' to avoid running out of bits in the page flags. What changed?
>
> I think that for 64-bit there are still plenty of free bits. I didn't
> check now, but when I played with SPARSEMEM on m68k there were 8 bits
> for section out of 32.
>
> > With this patch, I can trigger:
> >
> > ./include/linux/mmzone.h:1170:2: error: Allocator MAX_ORDER exceeds
> SECTION_SIZE
> > #error Allocator MAX_ORDER exceeds SECTION_SIZE
> >
> > if I bump up NR_CPUS and NODES_SHIFT.
>
> I don't think it's related to NR_CPUS and NODES_SHIFT.
> This seems rather 64K pages that cause this.
>
> Not that it shouldn't be addressed.
Right now, only 4K pages define ARM64_SWAPPER_USES_SECTION_MAPS;
other configurations use vmemmap_populate_basepages().
The original patch was only meant to address the issue for 4K pages:
https://lore.kernel.org/lkml/20200812010655.96339-1-liwei213@huawei.com/
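The waste the original patch targets can be sketched with some quick arithmetic. This is a hedged illustration, not kernel code: it assumes 4K pages and a 64-byte struct page (typical for an arm64 defconfig of this era, but both are configuration-dependent).

```python
# With SPARSEMEM_VMEMMAP, the struct page array ("page map") is reserved
# per section, so a sparsely populated section wastes the vmemmap covering
# its holes. Assumed values: 4K pages, 64-byte struct page.
PAGE_SHIFT = 12          # 4K pages
STRUCT_PAGE_SIZE = 64    # bytes per struct page (assumed)

def vmemmap_per_section(section_size_bits):
    """Bytes of vmemmap reserved for one section."""
    pages = 1 << (section_size_bits - PAGE_SHIFT)
    return pages * STRUCT_PAGE_SIZE

# 1GiB sections (SECTION_SIZE_BITS = 30) vs 128MiB sections (27):
print(vmemmap_per_section(30) // (1 << 20))  # 16 MiB per section
print(vmemmap_per_section(27) // (1 << 20))  # 2 MiB per section
```

So a small memory hole inside a 1GiB section can pin up to 16MiB of vmemmap, while 128MiB sections cap the per-hole waste at 2MiB.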
Should we do something like the below?
#ifdef CONFIG_ARM64_4K_PAGES
#define SECTION_SIZE_BITS 27
#else
#define SECTION_SIZE_BITS 30
#endif
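The build error Will hit comes from the check in include/linux/mmzone.h that the buddy allocator's largest block (MAX_ORDER - 1 pages) must fit inside one section. A sketch of that constraint, assuming the arm64 defaults of this era (MAX_ORDER of 11 for 4K pages, and 14 for 64K pages via FORCE_MAX_ZONEORDER):

```python
# Mirrors the mmzone.h check:
#   #if (MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS
#   #error Allocator MAX_ORDER exceeds SECTION_SIZE
def max_order_fits(max_order, page_shift, section_size_bits):
    """True if the largest buddy block fits in one sparsemem section."""
    return (max_order - 1 + page_shift) <= section_size_bits

print(max_order_fits(11, 12, 27))  # True:  4K pages are fine with 27
print(max_order_fits(14, 16, 27))  # False: 64K pages would hit the #error
print(max_order_fits(14, 16, 30))  # True:  64K pages need the larger section
```

This is why 64K pages, not NR_CPUS or NODES_SHIFT, trigger the error, and why gating the smaller SECTION_SIZE_BITS on the page size sidesteps it.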
>
> > Will
>
> --
> Sincerely yours,
> Mike.
Thanks
Barry