[PATCHv6 06/17] LoongArch/mm: Align vmemmap to maximal folio size

Kiryl Shutsemau kas at kernel.org
Thu Feb 5 05:43:58 PST 2026


On Thu, Feb 05, 2026 at 01:56:36PM +0100, David Hildenbrand (Arm) wrote:
> On 2/4/26 17:56, David Hildenbrand (arm) wrote:
> > On 2/2/26 16:56, Kiryl Shutsemau wrote:
> > > The upcoming change to the HugeTLB vmemmap optimization (HVO) requires
> > > struct pages of the head page to be naturally aligned with regard to the
> > > folio size.
> > > 
> > > Align vmemmap to MAX_FOLIO_NR_PAGES.
> > > 
> > > Signed-off-by: Kiryl Shutsemau <kas at kernel.org>
> > > ---
> > >   arch/loongarch/include/asm/pgtable.h | 3 ++-
> > >   1 file changed, 2 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
> > > index c33b3bcb733e..f9416acb9156 100644
> > > --- a/arch/loongarch/include/asm/pgtable.h
> > > +++ b/arch/loongarch/include/asm/pgtable.h
> > > @@ -113,7 +113,8 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
> > >  	 min(PTRS_PER_PGD * PTRS_PER_PUD * PTRS_PER_PMD * PTRS_PER_PTE * PAGE_SIZE, (1UL << cpu_vabits) / 2) - PMD_SIZE - VMEMMAP_SIZE - KFENCE_AREA_SIZE)
> > >  #endif
> > > -#define vmemmap		((struct page *)((VMALLOC_END + PMD_SIZE) & PMD_MASK))
> > > +#define VMEMMAP_ALIGN	max(PMD_SIZE, MAX_FOLIO_NR_PAGES * sizeof(struct page))
> > > +#define vmemmap		((struct page *)(ALIGN(VMALLOC_END, VMEMMAP_ALIGN)))
> > 
> > 
> > Same comment, the "MAX_FOLIO_NR_PAGES * sizeof(struct page)" is just
> > black magic here and the description of the situation is wrong.
> > 
> > Maybe you want to pull the magic "MAX_FOLIO_NR_PAGES * sizeof(struct page)"
> > into the core and call it
> > 
> > #define MAX_FOLIO_VMEMMAP_ALIGN	(MAX_FOLIO_NR_PAGES * sizeof(struct page))
> > 
> > But then special-case it based on (a) HVO being configured in and (b) HVO
> > being possible
> > 
> > #ifdef HUGETLB_PAGE_OPTIMIZE_VMEMMAP && is_power_of_2(sizeof(struct page))
> > /* A very helpful comment explaining the situation. */
> > #define MAX_FOLIO_VMEMMAP_ALIGN	(MAX_FOLIO_NR_PAGES * sizeof(struct page))
> > #else
> > #define MAX_FOLIO_VMEMMAP_ALIGN	0
> > #endif
> > 
> > Something like that.
> > 
> 
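(FWIW, if we go that route, I guess the LoongArch side would end up as
something like the following -- untested sketch, reusing your
MAX_FOLIO_VMEMMAP_ALIGN name:

#define VMEMMAP_ALIGN	max(PMD_SIZE, MAX_FOLIO_VMEMMAP_ALIGN)
#define vmemmap		((struct page *)(ALIGN(VMALLOC_END, VMEMMAP_ALIGN)))

with the #else case naturally degrading to plain PMD_SIZE alignment,
modulo max() wanting matching types for the 0.)
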
> Thinking about this ...
> 
> the vmemmap start is always struct-page-aligned. Otherwise we'd be in
> trouble already.
> 
> Isn't it then sufficient to just align the start to MAX_FOLIO_NR_PAGES?
> 
> Let's assume sizeof(struct page) == 64 and MAX_FOLIO_NR_PAGES = 512 for
> simplicity.
> 
> vmemmap start would be multiples of 512 (0x0010000000).
> 
> 512, 1024, 1536, 2048 ...
> 
> Assume we have a 256-page folio at 1536+256 = 0x111000000

s/0x/0b/, but okay.

> Assume we have the last page of that folio (0x011111111111); we would just
> get to the start of that folio by AND-ing with ~(256-1).
> 
> Which case am I ignoring?

IIUC, you are ignoring the actual size of struct page. It is not 1 byte :P

The last page of this 256-page folio is at 1536+256 + (64 * 255), which
is 0b100011011000000. There's no mask you can AND with that gets you to
0b11100000000.
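
To put numbers on it, a quick userspace toy (pretend 64-byte struct page,
the 256-page folio from your example; not kernel code):

/* tail-to-head mask demo: head struct page at byte address 1536 + 256 */
#include <stdio.h>

int main(void)
{
	unsigned long sz   = 64;		/* pretend sizeof(struct page) */
	unsigned long nr   = 256;		/* folio size in pages */
	unsigned long head = 1536 + 256;	/* 0b11100000000 */
	unsigned long tail = head + (nr - 1) * sz; /* 0b100011011000000 */

	/* Masking to nr * sz only recovers the head if the head address is
	 * itself a multiple of nr * sz, i.e. if vmemmap is aligned to
	 * MAX_FOLIO_NR_PAGES * sizeof(struct page). */
	printf("tail & mask = %lu, head = %lu\n",
	       tail & ~(nr * sz - 1), head);	/* prints 16384 vs 1792 */
	return 0;
}

Which is why the patch aligns vmemmap to max(PMD_SIZE, MAX_FOLIO_NR_PAGES *
sizeof(struct page)) rather than to a bare page count.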

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
