[PATCH v2 3/4] arm64: mm: make vmemmap region a projection of the linear region

Catalin Marinas catalin.marinas at arm.com
Tue Nov 10 10:39:49 EST 2020


On Tue, Nov 10, 2020 at 03:08:45PM +0100, Ard Biesheuvel wrote:
> On Tue, 10 Nov 2020 at 14:10, Ard Biesheuvel <ardb at kernel.org> wrote:
> > On Tue, 10 Nov 2020 at 13:55, Geert Uytterhoeven <geert at linux-m68k.org> wrote:
> > > On Thu, Oct 8, 2020 at 5:43 PM Ard Biesheuvel <ardb at kernel.org> wrote:
> > > > Now that we have reverted the introduction of the vmemmap struct page
> > > > pointer and the separate physvirt_offset, we can simplify things further,
> > > > and place the vmemmap region in the VA space in such a way that virtual
> > > > to page translations and vice versa can be implemented using a single
> > > > arithmetic shift.
> > > >
> > > > One happy coincidence resulting from this is that the 48-bit/4k and
> > > > 52-bit/64k configurations (which are assumed to be the two most
> > > > prevalent) end up with the same placement of the vmemmap region. In
> > > > a subsequent patch, we will take advantage of this, and unify the
> > > > memory maps even more.
> > > >
> > > > Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> > >
> > > This is now commit 8c96400d6a39be76 ("arm64: mm: make vmemmap region a
> > > projection of the linear region") in arm64/for-next/core.
> > >
> > > > --- a/arch/arm64/mm/init.c
> > > > +++ b/arch/arm64/mm/init.c
> > > > @@ -504,6 +504,8 @@ static void __init free_unused_memmap(void)
> > > >   */
> > > >  void __init mem_init(void)
> > > >  {
> > > > +       BUILD_BUG_ON(!is_power_of_2(sizeof(struct page)));
> > >
> > > This check is triggering for me.
> > >
> > > If CONFIG_MEMCG=n, sizeof(struct page) = 56.
> > >
> > > If CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y, this is mitigated by
> > > the explicit alignment:
> > >
> > >     #ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
> > >     #define _struct_page_alignment  __aligned(2 * sizeof(unsigned long))
> > >     #else
> > >     #define _struct_page_alignment
> > >     #endif
> > >
> > >     struct page { ... } _struct_page_alignment;
> > >
> > > However, HAVE_ALIGNED_STRUCT_PAGE is selected only if SLUB is
> > > enabled, while my .config uses SLAB.
> > >
> >
> > Thanks for the report. I will look into this.
> 
> OK, so we can obviously fix this easily by setting
> CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y unconditionally instead of only 'if
> SLUB'. The question is whether that is likely to lead to any
> performance regressions.

I'm not sure I understand. The BUILD_BUG_ON() in mem_init() triggers if
sizeof(struct page) is not a power of 2. HAVE_ALIGNED_STRUCT_PAGE only
forces the alignment of struct page to 16 bytes, which rounds its size
up to a multiple of 16, not to a power of 2; a 48-byte structure, for
example, would still trip the check.

-- 
Catalin


