[PATCH v2 3/4] arm64: mm: make vmemmap region a projection of the linear region

Geert Uytterhoeven geert at linux-m68k.org
Tue Nov 10 09:56:38 EST 2020


Hi Ard,

On Tue, Nov 10, 2020 at 3:09 PM Ard Biesheuvel <ardb at kernel.org> wrote:
> On Tue, 10 Nov 2020 at 14:10, Ard Biesheuvel <ardb at kernel.org> wrote:
> > On Tue, 10 Nov 2020 at 13:55, Geert Uytterhoeven <geert at linux-m68k.org> wrote:
> > > On Thu, Oct 8, 2020 at 5:43 PM Ard Biesheuvel <ardb at kernel.org> wrote:
> > > > Now that we have reverted the introduction of the vmemmap struct page
> > > > pointer and the separate physvirt_offset, we can simplify things further,
> > > > and place the vmemmap region in the VA space in such a way that virtual
> > > > to page translations and vice versa can be implemented using a single
> > > > arithmetic shift.
> > > >
> > > > One happy coincidence resulting from this is that the 48-bit/4k and
> > > > 52-bit/64k configurations (which are assumed to be the two most
> > > > prevalent) end up with the same placement of the vmemmap region. In
> > > > a subsequent patch, we will take advantage of this, and unify the
> > > > memory maps even more.
> > > >
> > > > Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> > >
> > > This is now commit 8c96400d6a39be76 ("arm64: mm: make vmemmap region a
> > > projection of the linear region") in arm64/for-next/core.
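
As an aside, for anyone following along: the "single arithmetic shift"
mentioned in the patch description boils down to something like the sketch
below. This is a stand-alone illustration with made-up constant values and
hypothetical sketch_*() names, not the actual arm64 macros (the real ones
in arch/arm64/include/asm/memory.h are more involved):

    /*
     * Sketch only: with the vmemmap region laid out as a projection of the
     * linear region, converting between a linear-map address and its
     * struct page is an add/subtract plus one shift in each direction.
     * This only works if sizeof(struct page) is a power of two, which is
     * what the BUILD_BUG_ON further down enforces.
     */
    #define PAGE_SHIFT          12                      /* 4k pages */
    #define STRUCT_PAGE_SHIFT   6                       /* sizeof(struct page) == 64 */
    #define PAGE_OFFSET         0xffff000000000000UL    /* illustrative value */
    #define VMEMMAP_START       0xfffffc0000000000UL    /* illustrative value */

    /* linear-map address -> address of its struct page in the vmemmap array */
    static inline unsigned long sketch_virt_to_page(unsigned long vaddr)
    {
        unsigned long pfn_idx = (vaddr - PAGE_OFFSET) >> PAGE_SHIFT;

        return VMEMMAP_START + (pfn_idx << STRUCT_PAGE_SHIFT);
    }

    /* struct page address in the vmemmap array -> linear-map address */
    static inline unsigned long sketch_page_to_virt(unsigned long page)
    {
        unsigned long pfn_idx = (page - VMEMMAP_START) >> STRUCT_PAGE_SHIFT;

        return PAGE_OFFSET + (pfn_idx << PAGE_SHIFT);
    }
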
> > >
> > > > --- a/arch/arm64/mm/init.c
> > > > +++ b/arch/arm64/mm/init.c
> > > > @@ -504,6 +504,8 @@ static void __init free_unused_memmap(void)
> > > >   */
> > > >  void __init mem_init(void)
> > > >  {
> > > > +       BUILD_BUG_ON(!is_power_of_2(sizeof(struct page)));
> > >
> > > This check is triggering for me.
> > >
> > > If CONFIG_MEMCG=n, sizeof(struct page) = 56.
> > >
> > > If CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y, this is mitigated by
> > > the explicit alignment:
> > >
> > >     #ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
> > >     #define _struct_page_alignment  __aligned(2 * sizeof(unsigned long))
> > >     #else
> > >     #define _struct_page_alignment
> > >     #endif
> > >
> > >     struct page { ... } _struct_page_alignment;
> > >
> > > However, HAVE_ALIGNED_STRUCT_PAGE is only selected when SLUB is enabled,
> > > while my .config is using SLAB.
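
For reference, is_power_of_2() is roughly "n != 0 && (n & (n - 1)) == 0",
and 56 cannot pass it:

    56 & 55 = 0b111000  & 0b110111  = 0b110000 != 0   ->  not a power of two
    64 & 63 = 0b1000000 & 0b0111111 = 0               ->  power of two, check passes

so the BUILD_BUG_ON fires for any configuration where struct page ends up
at 56 bytes, e.g. CONFIG_MEMCG=n with SLAB.
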
> > >
> >
> > Thanks for the report. I will look into this.
>
> OK, so we can obviously fix this easily by setting
> CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y unconditionally instead of only 'if
> SLUB'. The question is whether that is likely to lead to any
> performance regressions.
>
> So first of all, having a smaller struct page means we can fit more of
> them into memory. On a 4k pages config with SPARSEMEM_VMEMMAP enabled
> (which allocates struct pages in 2M blocks), every 2M block can cover
> 146 MB of DRAM instead of 128 MB. I'm not sure what kind of DRAM
> arrangement would be needed to take advantage of this in practice,
> though.

So this starts making a difference only on systems with more than 1 GiB
of RAM, where we can probably afford losing 2 MiB.
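
Spelling out the arithmetic for the 4k-page case (vmemmap populated in
2 MiB blocks, numbers as in Ard's mail):

    2 MiB / 64 B per struct page = 32768 pages -> covers 32768 * 4 KiB = 128 MiB
    2 MiB / 56 B per struct page = 37449 pages -> covers ~146 MiB

    per 1 GiB of DRAM: 262144 struct pages * 64 B = 16 MiB of memmap (8 blocks)
                       262144 struct pages * 56 B = 14 MiB of memmap (7 blocks)

So forcing the 64-byte layout costs roughly one extra 2 MiB block per GiB
of DRAM.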

> Another aspect is D-cache utilization: cache lines are typically 64
> bytes on arm64, and while we can improve D-cache utilization in theory
> (by virtue of the smaller size), the random-access nature of struct
> pages may well result in the opposite, given that 3 out of 4 struct
> pages now straddle two cache lines.
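
The 3-out-of-4 figure is easy to verify with a stand-alone toy program;
the dense packing of 56-byte objects against 64-byte cache lines mirrors
how struct pages sit back to back in the vmemmap array (this is not
kernel code, just a quick model):

    #include <stdio.h>

    /* Count how many densely packed 56-byte objects cross a 64-byte
     * cache line boundary; prints "768 of 1024", i.e. 3 out of 4. */
    int main(void)
    {
        int straddle = 0, total = 1024;

        for (int i = 0; i < total; i++) {
            unsigned long start = (unsigned long)i * 56;
            unsigned long end = start + 56 - 1;

            if (start / 64 != end / 64)
                straddle++;
        }
        printf("%d of %d straddle two cache lines\n", straddle, total);
        return 0;
    }
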
>
> Given the above, and given the purpose of this patch series, which was
> to tidy up and unify different configurations, in order to reduce the
> size of the validation matrix, I think it would be reasonable to
> simply set CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y for arm64 in all cases.

Thanks, sounds reasonable to me.

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert at linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


