SPARSEMEM memory needs?
Joakim Tjernlund
Joakim.Tjernlund at infinera.com
Tue Jun 7 00:39:26 PDT 2022
On Tue, 2022-06-07 at 09:15 +0200, Ard Biesheuvel wrote:
> On Tue, 7 Jun 2022 at 08:36, Joakim Tjernlund
> <Joakim.Tjernlund at infinera.com> wrote:
> >
> > On Mon, 2022-06-06 at 23:50 +0200, Ard Biesheuvel wrote:
> > > On Mon, 6 Jun 2022 at 22:37, Joakim Tjernlund
> > > <Joakim.Tjernlund at infinera.com> wrote:
> > > >
> > > > On Mon, 2022-06-06 at 21:10 +0200, Ard Biesheuvel wrote:
> > > > > Hello Joakim,
> > > > >
> > > > > On Mon, 6 Jun 2022 at 17:20, Joakim Tjernlund
> > > > > <Joakim.Tjernlund at infinera.com> wrote:
> > > > > >
> > > > > > I am trying to reduce the RAM used by the kernel; I enabled memblock debug and found these (annotated with a BT here):
> > > > > >
> > > > > > [ 0.000000] memblock_alloc_exact_nid_raw: 4194304 bytes align=0x200000 nid=0 from=0x0000000040000000 max_addr=0x0000000000000000 memmap_alloc+0x1c/0x2c
> > > > > > [ 0.000000] memblock_reserve: [0x0000000061c00000-0x0000000061ffffff] memblock_alloc_range_nid+0xc8/0x134
> > > > > > [ 0.000000] ------------[ cut here ]------------
> > > > > > [ 0.000000] Call trace:
> > > > > > [ 0.000000] vmemmap_alloc_block+0xc4/0xe8
> > > > > > [ 0.000000] vmemmap_pud_populate+0x24/0xb8
> > > > > > [ 0.000000] vmemmap_populate+0xa4/0x180
> > > > > > [ 0.000000] __populate_section_memmap+0x50/0x70
> > > > > > [ 0.000000] sparse_init_nid+0x164/0x1d4
> > > > > > [ 0.000000] sparse_init+0xb0/0x224
> > > > > > [ 0.000000] bootmem_init+0x40/0x80
> > > > > > [ 0.000000] setup_arch+0x244/0x540
> > > > > > [ 0.000000] start_kernel+0x60/0x804
> > > > > > [ 0.000000] __primary_switched+0xa0/0xa8
> > > > > > [ 0.000000] ---[ end trace 0000000000000000 ]---
> > > > > > [ 0.000000] memblock_alloc_try_nid_raw: 2097152 bytes align=0x200000 nid=0 from=0x0000000040000000 max_addr=0x0000000000000000 __earlyonly_bootmem_alloc+0x20/0x28
> > > > > > [ 0.000000] memblock_reserve: [0x0000000061a00000-0x0000000061bfffff] memblock_alloc_range_nid+0xc8/0x134
> > > > >
> > > > > I'm not sure which backtrace belongs with which memblock debug
> > > > > message, but something looks wrong here. vmemmap_pud_populate() does
> > > > > an allocation of PAGE_SIZE, but your kernel is allocating 2 megabytes
> > > > > here.
> > > >
> > > > I have added this to get a BT; I trimmed the registers away and just kept the backtrace:
> > > > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > > > index bdce883f9286..f8326e1295ed 100644
> > > > --- a/mm/sparse-vmemmap.c
> > > > +++ b/mm/sparse-vmemmap.c
> > > > @@ -418,9 +418,11 @@ void * __meminit vmemmap_alloc_block(unsigned long size, int node)
> > > >                         warned = true;
> > > >                 }
> > > >                 return NULL;
> > > > -       } else
> > > > +       } else {
> > > > +               WARN_ON(1);
> > > >                 return __earlyonly_bootmem_alloc(node, size, size,
> > > >                                 __pa(MAX_DMA_ADDRESS));
> > > > +       }
> > > >  }
> > > >
> > > >
> > > > I guess I may have trimmed the log a bit too much. How does this look?
> > > >
> > > > [ 0.000000] memblock_alloc_exact_nid_raw: 4194304 bytes align=0x200000 nid=0 from=0x0000000040000000 max_addr=0x0000000000000000 memmap_alloc+0x1c/0x2c
> > > > [ 0.000000] memblock_reserve: [0x0000000061c00000-0x0000000061ffffff] memblock_alloc_range_nid+0xc8/0x134
> > >
> > > OK, so this one is unaccounted for.
> >
> > Need to find out where this is coming from then, thanks.
> >
> ...
> > OK, so every section of RAM costs 2MB to administer. I guess there is nothing one can do about that?
> > The one thing that came to my mind is that we would be happy with ARM64_PA_BITS_32; we don't have any
> > addresses above that in this small system.
> >
>
> I don't see how that is going to help.
>
> > >
> > > So the problem is that you only have 36 MB of DRAM, with a large hole
> > > in the middle. Sparsemem was actually designed for that (hence the
> > > name), and flatmem would make things much worse.
> >
> > Yes, that hole is not ideal.
>
> What you might try is changing the section size to 32 MB and mapping
> the vmemmap region down to pages. That way, the vmemmap region should
> only take up
> - 512 KiB for the struct page array[] itself
> - 4 KiB for the page table that replaces the 2 MB block mapping
>
> You could try the below and see if it makes any difference?
>
> diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
> index 4b73463423c3..a008f4342532 100644
> --- a/arch/arm64/include/asm/sparsemem.h
> +++ b/arch/arm64/include/asm/sparsemem.h
> @@ -23,7 +23,7 @@
> * entries could not be created for vmemmap mappings.
> * 16K follows 4K for simplicity.
> */
> -#define SECTION_SIZE_BITS 27
> +#define SECTION_SIZE_BITS 25
> #endif /* CONFIG_ARM64_64K_PAGES */
>
> #endif
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 5b1946f1805c..d25560a53a67 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1196,7 +1196,7 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
> }
> #endif
>
> -#if !ARM64_KERNEL_USES_PMD_MAPS
> +#if 1// !ARM64_KERNEL_USES_PMD_MAPS
> int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>                                struct vmem_altmap *altmap)
> {
That was a really good idea, now I have:
Memory: 29732K/36864K available (3648K kernel code, 698K rwdata, 936K rodata, 320K init, 255K bss, 7132K reserved, 0K cma-reserved)
Reserved dropped from 14+MB to 7+MB :)
Should I look into anything in particular before testing this at a bigger scale?
Jocke