[PATCH v2 3/4] arm64: mm: make vmemmap region a projection of the linear region

Ard Biesheuvel ardb at kernel.org
Tue Nov 10 08:10:07 EST 2020


On Tue, 10 Nov 2020 at 13:55, Geert Uytterhoeven <geert at linux-m68k.org> wrote:
>
> Hi Ard,
>
> On Thu, Oct 8, 2020 at 5:43 PM Ard Biesheuvel <ardb at kernel.org> wrote:
> > Now that we have reverted the introduction of the vmemmap struct page
> > pointer and the separate physvirt_offset, we can simplify things further,
> > and place the vmemmap region in the VA space in such a way that virtual
> > to page translations and vice versa can be implemented using a single
> > arithmetic shift.
> >
> > One happy coincidence resulting from this is that the 48-bit/4k and
> > 52-bit/64k configurations (which are assumed to be the two most
> > prevalent) end up with the same placement of the vmemmap region. In
> > a subsequent patch, we will take advantage of this, and unify the
> > memory maps even more.
> >
> > Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
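
For anyone following along, the rough idea is captured by the standalone
sketch below. The constants and the struct layout are made up for
illustration (they are not the actual arm64 definitions); the point is that
once vmemmap entry i corresponds to linear-map page i, both conversions are
a subtraction plus a shift, provided sizeof(struct page) is a power of two:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative constants and layout, not the actual arm64 definitions. */
    #define PAGE_SHIFT      12
    #define PAGE_SIZE       (1UL << PAGE_SHIFT)
    #define PAGE_OFFSET     0xffff000000000000UL    /* start of the linear map   */
    #define VMEMMAP_START   0xfffffc0000000000UL    /* start of the vmemmap area */

    struct page { uint64_t pad[8]; };               /* 64 bytes, a power of two  */

    /*
     * vmemmap entry i describes the page at PAGE_OFFSET + i * PAGE_SIZE, so
     * both conversions are a subtraction plus a shift; the multiplication or
     * division by sizeof(struct page) folds into a shift only because that
     * size is a power of two.
     */
    static struct page *virt_to_page(uint64_t vaddr)
    {
            uint64_t idx = (vaddr - PAGE_OFFSET) >> PAGE_SHIFT;

            return (struct page *)VMEMMAP_START + idx;
    }

    static uint64_t page_to_virt(const struct page *p)
    {
            uint64_t idx = p - (const struct page *)VMEMMAP_START;

            return PAGE_OFFSET + (idx << PAGE_SHIFT);
    }

    int main(void)
    {
            uint64_t vaddr = PAGE_OFFSET + 123 * PAGE_SIZE;
            struct page *p = virt_to_page(vaddr);

            printf("page index %llu, round trip 0x%llx\n",
                   (unsigned long long)(p - (struct page *)VMEMMAP_START),
                   (unsigned long long)page_to_virt(p));
            return 0;
    }

The real macros live in arch/arm64/include/asm/memory.h and take care of
further details such as pointer tagging, but the arithmetic is the same.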
>
> This is now commit 8c96400d6a39be76 ("arm64: mm: make vmemmap region a
> projection of the linear region") in arm64/for-next/core.
>
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -504,6 +504,8 @@ static void __init free_unused_memmap(void)
> >   */
> >  void __init mem_init(void)
> >  {
> > +       BUILD_BUG_ON(!is_power_of_2(sizeof(struct page)));
>
> This check is triggering for me.
>
> If CONFIG_MEMCG=n, sizeof(struct page) = 56.
>
> If CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y, this is mitigated by
> the explicit alignment:
>
>     #ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
>     #define _struct_page_alignment  __aligned(2 * sizeof(unsigned long))
>     #else
>     #define _struct_page_alignment
>     #endif
>
>     struct page { ... } _struct_page_alignment;
>
> However, HAVE_ALIGNED_STRUCT_PAGE is only selected when SLUB is enabled,
> while my .config is using SLAB.
>
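
To make the failure mode concrete, here is a quick userspace approximation;
the structs are hypothetical stand-ins rather than the real struct page
definition. With 56 bytes of payload the power-of-two check would fire, and
the explicit alignment attribute is what pads the size back up to 64:

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for struct page, not the real definition. */
    struct page_unaligned {
            unsigned long words[7];     /* 56 bytes on 64-bit, as with CONFIG_MEMCG=n */
    };

    struct page_aligned {
            unsigned long words[7];
    } __attribute__((__aligned__(2 * sizeof(unsigned long))));
                                        /* alignment pads sizeof up to 64 */

    #define is_power_of_2(n)    ((n) != 0 && (((n) & ((n) - 1)) == 0))

    int main(void)
    {
            printf("unaligned: %zu bytes, aligned: %zu bytes\n",
                   sizeof(struct page_unaligned), sizeof(struct page_aligned));

            assert(!is_power_of_2(sizeof(struct page_unaligned))); /* 56: check fires  */
            assert(is_power_of_2(sizeof(struct page_aligned)));    /* 64: check passes */
            return 0;
    }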

Thanks for the report. I will look into this.


