[PATCH] mm/huge_memory: Initialise the tags of the huge zero
Lance Yang
ioworker0 at gmail.com
Tue Nov 4 03:53:53 PST 2025
From: Lance Yang <lance.yang at linux.dev>
On Mon, 3 Nov 2025 14:30:12 +0000, Catalin Marinas wrote:
> On Mon, Nov 03, 2025 at 01:32:42PM +0000, Mark Brown wrote:
> > On Fri, Oct 31, 2025 at 04:57:50PM +0000, Catalin Marinas wrote:
> >
> > > On arm64 with MTE enabled, a page mapped as Normal Tagged (PROT_MTE) in
> > > user space will need to have its allocation tags initialised. This is
> > > normally done in the arm64 set_pte_at() after checking the memory
> > > attributes. Such a page is also marked with the PG_mte_tagged flag to
> > > avoid subsequent clearing. Since this relies on having a struct page,
> > > pte_special() mappings are ignored.
> >
> > We are seeing breakage in userspace on a range of arm64 platforms which
> > bisects to this commit in -next. We see traces like:
> >
> > [ 59.746701] Internal error: Oops - Undefined instruction: 0000000002000000 [#1] SMP
> >
> > ...
> >
> > [ 59.819007] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > [ 59.826055] pc : mte_zero_clear_page_tags+0x1c/0x40
> > [ 59.830980] lr : tag_clear_highpage+0x68/0x118
> >
> > ...
> >
> > [ 59.911874] Call trace:
> > [ 59.914333] mte_zero_clear_page_tags+0x1c/0x40 (P)
> > [ 59.919278] get_page_from_freelist+0x1a60/0x1c80
> > [ 59.924042] __alloc_frozen_pages_noprof+0x178/0xd20
> > [ 59.929068] alloc_pages_mpol+0xb4/0x1a4
> > [ 59.933022] alloc_frozen_pages_noprof+0x48/0xc0
> > [ 59.937683] folio_alloc_noprof+0x14/0x68
> > [ 59.941718] mm_get_huge_zero_folio+0xf4/0x30c
> > [ 59.946200] do_huge_pmd_anonymous_page+0x278/0x6a0
> > [ 59.951119] __handle_mm_fault+0x700/0x1834
> > [ 59.955332] handle_mm_fault+0x8c/0x2a0
> > [ 59.959190] do_page_fault+0x108/0x75c
> > [ 59.962964] do_translation_fault+0x5c/0x6c
> > [ 59.967181] do_mem_abort+0x40/0x90
>
> Thanks for the report. I missed the fact that the
> mte_zero_clear_page_tags() arch code issues MTE instructions
> irrespective of whether the hardware supports them. We got away with
> this so far since we check the VM_MTE flag, and that's only set if the
> hardware supports MTE.
>
> > Looking at the code:
> >
> > > - zero_folio = folio_alloc((GFP_TRANSHUGE | __GFP_ZERO) & ~__GFP_MOVABLE,
> > > + zero_folio = folio_alloc((GFP_TRANSHUGE | __GFP_ZERO | __GFP_ZEROTAGS) &
> > > + ~__GFP_MOVABLE,
> > > HPAGE_PMD_ORDER);
> >
> > This adds an unconditional __GFP_ZEROTAGS - from a quick scan it looks
> > like this was previously only set by vma_alloc_zeroed_movable_folio()
> > when the VMA has VM_MTE; I think we need a similar test here.
>
> We can't do this for the huge zero page since this will be shared by
> other vmas and not all would have VM_MTE set. I'll fix it in the arch
> code:
>
> -----------8<---------------------------------------------
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index d816ff44faff..125dfa6c613b 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -969,6 +969,16 @@ struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>
> void tag_clear_highpage(struct page *page)
> {
> + /*
> + * Check if MTE is supported and fall back to clear_highpage().
> + * get_huge_zero_folio() unconditionally passes __GFP_ZEROTAGS and
> + * post_alloc_hook() will invoke tag_clear_highpage().
> + */
> + if (!system_supports_mte()) {
> + clear_highpage(page);
> + return;
> + }
> +
> /* Newly allocated page, shouldn't have been tagged yet */
> WARN_ON_ONCE(!try_page_mte_tagging(page));
> mte_zero_clear_page_tags(page_address(page));
> ------------------8<------------------------------------------
>
> Testing now.
>
Good catch! LGTM, feel free to add:
Reviewed-by: Lance Yang <lance.yang at linux.dev>