[PATCH 1/3] arm64: gcs: Do not set PTE_SHARED on GCS mappings if FEAT_LPA2 is enabled
Catalin Marinas
catalin.marinas at arm.com
Fri Feb 20 08:45:11 PST 2026
On Fri, Feb 20, 2026 at 04:56:26PM +0100, David Hildenbrand wrote:
> On 2/20/26 15:05, Catalin Marinas wrote:
> > When FEAT_LPA2 is enabled, bits 8-9 of the PTE replace the
> > shareability attribute with bits 50-51 of the output address. The
> > _PAGE_GCS{,_RO} definitions include the PTE_SHARED bits as 0b11 and they
> > match the other user _PAGE_* prot macros.
>
> I assume that comes from _PAGE_DEFAULT -> _PROT_DEFAULT
Yes.
> > However, the difference is
> > that all the classic prot values are accessed via protection_map[] and
> > have the PTE_SHARED bits removed when LPA2 is enabled.
> >
> > Ensure that PAGE_GCS{,_RO} use the dynamic PTE_MAYBE_SHARED instead of
> > the static PTE_SHARED.
>
> I expected here a quick description of the symptom: "Leaving PTE_SHARED set
> results in kernel panics." etc. :)
Ah, yes, I forgot to give the details of the fault: it's a lot worse with
THP, an unhandled page fault; with small pages it's a bad page warning.
I'll respin with a better description.
> > diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> > index 161e8660eddd..a65f2c50e9ca 100644
> > --- a/arch/arm64/include/asm/pgtable-prot.h
> > +++ b/arch/arm64/include/asm/pgtable-prot.h
> > @@ -164,8 +164,8 @@ static inline bool __pure lpa2_is_enabled(void)
> > #define _PAGE_GCS (_PAGE_DEFAULT | PTE_NG | PTE_UXN | PTE_WRITE | PTE_USER)
> > #define _PAGE_GCS_RO (_PAGE_DEFAULT | PTE_NG | PTE_UXN | PTE_USER)
> > -#define PAGE_GCS __pgprot(_PAGE_GCS)
> > -#define PAGE_GCS_RO __pgprot(_PAGE_GCS_RO)
> > +#define PAGE_GCS __pgprot((_PAGE_GCS & ~PTE_SHARED) | PTE_MAYBE_SHARED)
> > +#define PAGE_GCS_RO __pgprot((_PAGE_GCS_RO & ~PTE_SHARED) | PTE_MAYBE_SHARED)
> > #define PIE_E0 ( \
> > PIRx_ELx_PERM_PREP(pte_pi_index(_PAGE_GCS), PIE_GCS) | \
> > diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
> > index 08ee177432c2..2e404441063b 100644
> > --- a/arch/arm64/mm/mmap.c
> > +++ b/arch/arm64/mm/mmap.c
> > @@ -87,7 +87,7 @@ pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
> > /* Short circuit GCS to avoid bloating the table. */
> > if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) {
> > - prot = _PAGE_GCS_RO;
> > + prot = pgprot_val(PAGE_GCS_RO);
> > } else {
> > prot = pgprot_val(protection_map[vm_flags &
> > (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
>
> The only confusion I have is why we don't update _PAGE_GCS/_PAGE_GCS_RO,
> consequently leaving PTE_SHARED set for the other users of
> _PAGE_GCS/_PAGE_GCS_RO in arch/arm64/include/asm/pgtable-prot.h.
>
> Staring at pte_pi_index() (and the definitions of PTE_PI_IDX_0), I assume it
> doesn't matter.
>
> Just curious why we don't fixup _PAGE_GCS / _PAGE_GCS_RO instead.
_PAGE_GCS needs to be constant as it ends up in asm, so we can't add
the dynamic PTE_MAYBE_SHARED there. There are other ways to solve this,
but keeping _PAGE_GCS{,_RO} as they are is somewhat more consistent with
the other _PAGE_* definitions, which all have PTE_SHARED.
Well, that's meant as a quick fix that can be easily backported. We
could overhaul these macros later to make them clearer.
--
Catalin