[PATCH] arm64: enable THP_SWAP for arm64

Barry Song 21cnbao at gmail.com
Wed May 25 04:10:41 PDT 2022


On Wed, May 25, 2022 at 7:14 AM Catalin Marinas <catalin.marinas at arm.com> wrote:
>
> On Tue, May 24, 2022 at 10:05:35PM +1200, Barry Song wrote:
> > On Tue, May 24, 2022 at 8:12 PM Catalin Marinas <catalin.marinas at arm.com> wrote:
> > > On Tue, May 24, 2022 at 07:14:03PM +1200, Barry Song wrote:
> > > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > > index d550f5acfaf3..8e3771c56fbf 100644
> > > > --- a/arch/arm64/Kconfig
> > > > +++ b/arch/arm64/Kconfig
> > > > @@ -98,6 +98,7 @@ config ARM64
> > > >       select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
> > > >       select ARCH_WANT_LD_ORPHAN_WARN
> > > >       select ARCH_WANTS_NO_INSTR
> > > > +     select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
> > >
> > > I'm not opposed to this but I think it would break pages mapped with
> > > PROT_MTE. We have an assumption in mte_sync_tags() that compound pages
> > > are not swapped out (or in). With MTE, we store the tags in a slab
> >
> > I assume you mean mte_sync_tags() requires that a THP not be swapped out as
> > a whole; even without THP_SWP, a THP is still swapped out after being split.
> > MTE doesn't stop a THP from being swapped out as a set of split pages, does it?
>
> That's correct, split THP pages are swapped out/in just fine.
>
> > > object (128-bytes per swapped page) and restore them when pages are
> > > swapped in. At some point we may teach the core swap code about such
> > > metadata but in the meantime that was the easiest way.
> >
> > If my previous assumption is true, the easiest way to enable THP_SWP
> > for the moment might be to always let mm fall back to the splitting
> > path on MTE hardware. For now I care more about THP_SWP itself, as
> > none of my hardware has MTE.
> >
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 45c358538f13..d55a2a3e41a9 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -44,6 +44,8 @@
> >         __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
> >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >
> > +#define arch_thp_swp_supported()	(!system_supports_mte())
> > +
> >  /*
> >   * Outside of a few very special situations (e.g. hibernation), we always
> >   * use broadcast TLB invalidation instructions, therefore a spurious page
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index 2999190adc22..064b6b03df9e 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -447,4 +447,16 @@ static inline int split_folio_to_list(struct folio *folio,
> >         return split_huge_page_to_list(&folio->page, list);
> >  }
> >
> > +/*
> > + * Architectures that select ARCH_WANTS_THP_SWAP but cannot support THP_SWP
> > + * due to implementation limitations (e.g. arm64 with MTE) can override
> > + * this to return false.
> > + */
> > +#ifndef arch_thp_swp_supported
> > +static inline bool arch_thp_swp_supported(void)
> > +{
> > +       return true;
> > +}
> > +#endif
> > +
> >  #endif /* _LINUX_HUGE_MM_H */
> > diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> > index 2b5531840583..dde685836328 100644
> > --- a/mm/swap_slots.c
> > +++ b/mm/swap_slots.c
> > @@ -309,7 +309,7 @@ swp_entry_t get_swap_page(struct page *page)
> >         entry.val = 0;
> >
> >         if (PageTransHuge(page)) {
> > -               if (IS_ENABLED(CONFIG_THP_SWAP))
> > +               if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
> >                         get_swap_pages(1, &entry, HPAGE_PMD_NR);
> >                 goto out;
>
> I think this should work and with your other proposal it would be
> limited to MTE pages:
>
> #define arch_thp_swp_supported(page)    (!test_bit(PG_mte_tagged, &page->flags))
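
With a page argument, the call site in get_swap_page() would presumably
look something like this (a sketch only, untested):

        if (PageTransHuge(page)) {
                if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported(page))
                        get_swap_pages(1, &entry, HPAGE_PMD_NR);
                goto out;
        }

and the generic fallback in huge_mm.h would grow the same parameter.
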
>
> Are THP pages loaded from swap as a whole or are they split? IIRC the

I can confirm a THP is written out as a whole, through this path:
[   90.622863]  __swap_writepage+0xe8/0x580
[   90.622881]  swap_writepage+0x44/0xf8
[   90.622891]  pageout+0xe0/0x2a8
[   90.622906]  shrink_page_list+0x9dc/0xde0
[   90.622917]  shrink_inactive_list+0x1ec/0x3c8
[   90.622928]  shrink_lruvec+0x3dc/0x628
[   90.622939]  shrink_node+0x37c/0x6a0
[   90.622950]  balance_pgdat+0x354/0x668
[   90.622961]  kswapd+0x1e0/0x3c0
[   90.622972]  kthread+0x110/0x120

but I have never captured a backtrace in which a THP is loaded back as a
whole, though the code does seem to have this path:
int swap_readpage(struct page *page, bool synchronous)
{
        ...
        bio = bio_alloc(sis->bdev, 1, REQ_OP_READ, GFP_KERNEL);
        bio->bi_iter.bi_sector = swap_page_sector(page);
        bio->bi_end_io = end_swap_bio_read;
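        /* note: the length is thp_size(page), so the whole THP goes into one bio */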
        bio_add_page(bio, page, thp_size(page), 0);
        ...
        submit_bio(bio);
}
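
i.e. if this path were ever taken with a THP, bio_add_page() would be
handed thp_size(page) and the whole huge page would come back in a
single bio; I just have not observed that happening in practice.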


> splitting still happens but after the swapping out finishes. Even if
> they are loaded as 4K pages, we still have the mte_save_tags() that only
> understands small pages currently, so rejecting THP pages is probably
> best.

Since I don't have any MTE hardware to do a valid test and go further,
I will disable THP_SWP entirely on hardware with MTE for now, in
patch v2.
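
Something like the below, in arch/arm64/include/asm/pgtable.h, is what
I have in mind for v2 (just a sketch at this point, untested):

static inline bool arch_thp_swp_supported(void)
{
        /* mte_save_tags() only understands small pages for now */
        return !system_supports_mte();
}
#define arch_thp_swp_supported arch_thp_swp_supported

An inline function is a bit cleaner than the bare macro and matches the
generic fallback in huge_mm.h.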

>
> --
> Catalin

Thanks
Barry


