[PATCH] arm64: enable THP_SWAP for arm64
Barry Song
21cnbao at gmail.com
Tue May 24 04:15:20 PDT 2022
On Tue, May 24, 2022 at 10:05 PM Barry Song <21cnbao at gmail.com> wrote:
>
> On Tue, May 24, 2022 at 8:12 PM Catalin Marinas <catalin.marinas at arm.com> wrote:
> >
> > On Tue, May 24, 2022 at 07:14:03PM +1200, Barry Song wrote:
> > > From: Barry Song <v-songbaohua at oppo.com>
> > >
> > > THP_SWAP has been proven to improve swap throughput significantly on
> > > x86_64, according to commit bd4c82c22c367e ("mm, THP, swap: delay
> > > splitting THP after swapped out").
> > > With a 4K page size, arm64 is quite similar to x86_64 in that PMD-mapped
> > > THPs are 2MB, so we should see a similar improvement. For larger page
> > > sizes such as 16KB and 64KB, a PMD-sized THP (32MB and 512MB
> > > respectively) might be too large, and negative side effects such as
> > > increased I/O latency could become a problem. Thus, for now we only
> > > enable THP_SWAP for the 4K page size, the configuration that matches
> > > x86_64.
> > >
> > > Cc: "Huang, Ying" <ying.huang at intel.com>
> > > Cc: Minchan Kim <minchan at kernel.org>
> > > Cc: Johannes Weiner <hannes at cmpxchg.org>
> > > Cc: Hugh Dickins <hughd at google.com>
> > > Cc: Shaohua Li <shli at kernel.org>
> > > Cc: Rik van Riel <riel at redhat.com>
> > > Cc: Andrea Arcangeli <aarcange at redhat.com>
> > > Signed-off-by: Barry Song <v-songbaohua at oppo.com>
> > > ---
> > > arch/arm64/Kconfig | 1 +
> > > 1 file changed, 1 insertion(+)
> > >
> > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > index d550f5acfaf3..8e3771c56fbf 100644
> > > --- a/arch/arm64/Kconfig
> > > +++ b/arch/arm64/Kconfig
> > > @@ -98,6 +98,7 @@ config ARM64
> > > select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
> > > select ARCH_WANT_LD_ORPHAN_WARN
> > > select ARCH_WANTS_NO_INSTR
> > > + select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
> >
> > I'm not opposed to this but I think it would break pages mapped with
> > PROT_MTE. We have an assumption in mte_sync_tags() that compound pages
> > are not swapped out (or in). With MTE, we store the tags in a slab
>
> I assume you mean mte_sync_tags() requires that a THP is not swapped out as a
> whole; without THP_SWAP, a THP is still swapped out after being split. MTE
> doesn't prevent a THP from being swapped out as a set of split base pages,
> does it?
>
> > object (128 bytes per swapped page) and restore them when pages are
> > swapped in. At some point we may teach the core swap code about such
> > metadata but in the meantime that was the easiest way.
> >
>
> If my previous assumption is true, the easiest way to enable THP_SWAP for now
> might be to always let mm fall back to splitting on MTE hardware. For the
> moment I care more about THP_SWAP itself, as none of my hardware has MTE.
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 45c358538f13..d55a2a3e41a9 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -44,6 +44,8 @@
> __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> +#define arch_thp_swp_supported() (!system_supports_mte())
> +
> /*
> * Outside of a few very special situations (e.g. hibernation), we always
> * use broadcast TLB invalidation instructions, therefore a spurious page
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 2999190adc22..064b6b03df9e 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -447,4 +447,16 @@ static inline int split_folio_to_list(struct folio *folio,
> return split_huge_page_to_list(&folio->page, list);
> }
>
> +/*
> + * Architectures that select ARCH_WANTS_THP_SWAP but cannot support THP_SWAP
> + * due to implementation limitations (e.g. arm64 with MTE) can override this
> + * to false.
> + */
> +#ifndef arch_thp_swp_supported
> +static inline bool arch_thp_swp_supported(void)
> +{
> + return true;
> +}
> +#endif
> +
> #endif /* _LINUX_HUGE_MM_H */
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 2b5531840583..dde685836328 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -309,7 +309,7 @@ swp_entry_t get_swap_page(struct page *page)
> entry.val = 0;
>
> if (PageTransHuge(page)) {
> - if (IS_ENABLED(CONFIG_THP_SWAP))
> + if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
> get_swap_pages(1, &entry, HPAGE_PMD_NR);
> goto out;
> }
>
Could I actually go even further and only split MTE-tagged pages?
For mm core:
+/*
+ * Architectures that select ARCH_WANTS_THP_SWAP but cannot support THP_SWAP
+ * due to implementation limitations (e.g. arm64 with MTE) can override this
+ * to false on a per-page basis.
+ */
+#ifndef arch_thp_swp_supported
+static inline bool arch_thp_swp_supported(struct page *page)
+{
+ return true;
+}
+#endif
+
For arm64:
+#define arch_thp_swp_supported(page) (!test_bit(PG_mte_tagged, &(page)->flags))
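If so, the get_swap_page() call site would also need to pass the page down,
roughly like the below (untested sketch, just the earlier swap_slots.c hunk
with the page argument added):

	if (PageTransHuge(page)) {
		/* for MTE-tagged pages this is false, so the THP falls back to splitting */
		if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported(page))
			get_swap_pages(1, &entry, HPAGE_PMD_NR);
		goto out;
	}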
But I don't have MTE hardware to test with, so to me, disabling THP_SWAP
entirely whenever MTE is present is the safer option.
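In that case, the arm64 override could also be spelled as a static inline that
matches the generic fallback's prototype, rather than a bare macro; something
like this (again untested, just a sketch):

	#define arch_thp_swp_supported arch_thp_swp_supported
	static inline bool arch_thp_swp_supported(void)
	{
		/* with MTE in use, fall back to splitting THP before swap-out */
		return !system_supports_mte();
	}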
Thoughts?
> > --
> > Catalin
>
> Thanks
> Barry