[PATCH v2 3/4] arm64: mm: make vmemmap region a projection of the linear region

Ard Biesheuvel ardb at kernel.org
Tue Nov 10 11:18:59 EST 2020


On Tue, 10 Nov 2020 at 17:14, Catalin Marinas <catalin.marinas at arm.com> wrote:
>
> On Tue, Nov 10, 2020 at 04:42:53PM +0100, Ard Biesheuvel wrote:
> > On Tue, 10 Nov 2020 at 16:39, Catalin Marinas <catalin.marinas at arm.com> wrote:
> > > On Tue, Nov 10, 2020 at 03:08:45PM +0100, Ard Biesheuvel wrote:
> > > > On Tue, 10 Nov 2020 at 14:10, Ard Biesheuvel <ardb at kernel.org> wrote:
> > > > > On Tue, 10 Nov 2020 at 13:55, Geert Uytterhoeven <geert at linux-m68k.org> wrote:
> > > > > > On Thu, Oct 8, 2020 at 5:43 PM Ard Biesheuvel <ardb at kernel.org> wrote:
> > > > > > > --- a/arch/arm64/mm/init.c
> > > > > > > +++ b/arch/arm64/mm/init.c
> > > > > > > @@ -504,6 +504,8 @@ static void __init free_unused_memmap(void)
> > > > > > >   */
> > > > > > >  void __init mem_init(void)
> > > > > > >  {
> > > > > > > +       BUILD_BUG_ON(!is_power_of_2(sizeof(struct page)));
> > > > > >
> > > > > > This check is triggering for me.
> > > > > >
> > > > > > If CONFIG_MEMCG=n, sizeof(struct page) = 56.
> > > > > >
> > > > > > If CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y, this is mitigated by
> > > > > > the explicit alignment:
> > > > > >
> > > > > >     #ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
> > > > > >     #define _struct_page_alignment  __aligned(2 * sizeof(unsigned long))
> > > > > >     #else
> > > > > >     #define _struct_page_alignment
> > > > > >     #endif
> > > > > >
> > > > > >     struct page { ... } _struct_page_alignment;
> > > > > >
> > > > > > However, HAVE_ALIGNED_STRUCT_PAGE is only selected if SLUB is
> > > > > > enabled, while my .config uses SLAB.
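
[ To illustrate the mitigation, here is a minimal userspace sketch; the
  56-byte payload merely stands in for the MEMCG=n struct page fields
  and is not the real layout:

    #include <stdio.h>

    /* 56 bytes of fields, as with CONFIG_MEMCG=n */
    struct fields56 { long w[7]; };

    /* the same fields with HAVE_ALIGNED_STRUCT_PAGE-style alignment */
    struct fields56_aligned { long w[7]; } __attribute__((aligned(16)));

    int main(void)
    {
            printf("%zu\n", sizeof(struct fields56));         /* 56: not a power of 2 */
            printf("%zu\n", sizeof(struct fields56_aligned)); /* 64: power of 2 */
            return 0;
    }
]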
> > > > > >
> > > > >
> > > > > Thanks for the report. I will look into this.
> > > >
> > > > OK, so we can obviously fix this by selecting
> > > > HAVE_ALIGNED_STRUCT_PAGE unconditionally instead of only 'if SLUB'.
> > > > The question is whether that is likely to lead to any performance
> > > > regressions.
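
[ For concreteness, the change under discussion would amount to
  something like the following in arch/arm64/Kconfig, shown here as a
  sketch rather than a tested diff:

    -	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
    +	select HAVE_ALIGNED_STRUCT_PAGE
]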
> > >
> > > I'm not sure I understand. The BUILD_BUG_ON() in mem_init() triggers
> > > if sizeof(struct page) is not a power of 2. HAVE_ALIGNED_STRUCT_PAGE
> > > forces the struct page alignment to 16 bytes but, for example, a
> > > 48-byte structure is still not a power of 2.
> >
> > True, but looking at include/linux/mm_types.h, I don't see how that
> > would happen.
>
> Not with 48, and it probably won't ever go beyond 64 for "production"
> builds. But say someone wants to experiment with some debug data in
> struct page and adds a long. The structure (I think it's 64 now with
> MEMCG=y) becomes 72, which gets force-aligned up to 80. That triggers
> the build-bug.
>
> Anyway, I don't mind the forced alignment, only that the build-bug you
> added has a different requirement than what HAVE_ALIGNED_STRUCT_PAGE
> provides (power of 2 vs 16-byte aligned).
>
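
[ To make that concrete, a minimal sketch of the hypothetical case; the
  72-byte payload is illustrative, not the real struct page layout:

    /* hypothetical: struct page grown to 72 bytes by one extra debug long */
    struct fields72 { long w[9]; } __attribute__((aligned(16)));

    /*
     * 16-byte alignment rounds 72 up to 80; 80 is 16-byte aligned but
     * not a power of 2, so is_power_of_2(sizeof(struct fields72)) is
     * still false and the BUILD_BUG_ON would still fire.
     */
    _Static_assert(sizeof(struct fields72) == 80, "72 rounds up to 80");
]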
> AFAICT, VMEMMAP_START and PAGE_OFFSET are both compile-time constants,
> so can we not revert to the original virt_to_page and page_to_virt
> macros? They wouldn't be as efficient, but it may not matter much (and
> if the struct size is a power of 2, the compiler should turn the
> division/multiplication into shifts).
>

Works for me. I'll go and code that up.
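
[ For reference, the generic SPARSEMEM_VMEMMAP model in
  include/asm-generic/memory_model.h already expresses the conversion as
  indexing into the virtually contiguous memmap array:

    /* memmap is virtually contiguous.  */
    #define __pfn_to_page(pfn)	(vmemmap + (pfn))
    #define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)

  The pointer addition and subtraction imply a multiplication and a
  division by sizeof(struct page); when that size is a power of 2, the
  compiler lowers both to shifts, so the cost relative to the masking
  approach should be small. ]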


