[PATCH 1/3] arm64: gcs: Do not set PTE_SHARED on GCS mappings if FEAT_LPA2 is enabled

David Hildenbrand (Arm) <david@kernel.org>
Fri Feb 20 07:56:26 PST 2026


On 2/20/26 15:05, Catalin Marinas wrote:
> When FEAT_LPA2 is enabled, bits 8-9 of the PTE replace the
> shareability attribute with bits 50-51 of the output address. The
> _PAGE_GCS{,_RO} definitions include the PTE_SHARED bits as 0b11 and they
> match the other user _PAGE_* prot macros. 

I assume that comes from _PAGE_DEFAULT -> _PROT_DEFAULT

> However, the difference is
> that all the classic prot values are accessed via protection_map[] and
> have the PTE_SHARED bits removed when LPA2 is enabled.
> 
> Ensure that PAGE_GCS{,_RO} use the dynamic PTE_MAYBE_SHARED instead of
> the static PTE_SHARED.

I expected a quick description of the symptom here: "Leaving PTE_SHARED 
set results in kernel panics." etc. :)

> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Fixes: 6497b66ba694 ("arm64/mm: Map pages for guarded control stack")
> Reported-by: Emanuele Rocca <emanuele.rocca@arm.com>
> Cc: <stable@vger.kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>   arch/arm64/include/asm/pgtable-prot.h | 4 ++--
>   arch/arm64/mm/mmap.c                  | 2 +-
>   2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> index 161e8660eddd..a65f2c50e9ca 100644
> --- a/arch/arm64/include/asm/pgtable-prot.h
> +++ b/arch/arm64/include/asm/pgtable-prot.h
> @@ -164,8 +164,8 @@ static inline bool __pure lpa2_is_enabled(void)
>   #define _PAGE_GCS	(_PAGE_DEFAULT | PTE_NG | PTE_UXN | PTE_WRITE | PTE_USER)
>   #define _PAGE_GCS_RO	(_PAGE_DEFAULT | PTE_NG | PTE_UXN | PTE_USER)
>   
> -#define PAGE_GCS	__pgprot(_PAGE_GCS)
> -#define PAGE_GCS_RO	__pgprot(_PAGE_GCS_RO)
> +#define PAGE_GCS	__pgprot((_PAGE_GCS & ~PTE_SHARED) | PTE_MAYBE_SHARED)
> +#define PAGE_GCS_RO	__pgprot((_PAGE_GCS_RO & ~PTE_SHARED) | PTE_MAYBE_SHARED)
>   
>   #define PIE_E0	( \
>   	PIRx_ELx_PERM_PREP(pte_pi_index(_PAGE_GCS),           PIE_GCS)  | \
> diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
> index 08ee177432c2..2e404441063b 100644
> --- a/arch/arm64/mm/mmap.c
> +++ b/arch/arm64/mm/mmap.c
> @@ -87,7 +87,7 @@ pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
>   
>   	/* Short circuit GCS to avoid bloating the table. */
>   	if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) {
> -		prot = _PAGE_GCS_RO;
> +		prot = pgprot_val(PAGE_GCS_RO);
>   	} else {
>   		prot = pgprot_val(protection_map[vm_flags &
>   				   (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);

The only confusion I have is why we don't update _PAGE_GCS/_PAGE_GCS_RO 
themselves, which leaves PTE_SHARED set for the other users of those 
macros in arch/arm64/include/asm/pgtable-prot.h.

Staring at pte_pi_index() (and the definitions of PTE_PI_IDX_0), I 
assume it doesn't matter.

Just curious why we don't fixup _PAGE_GCS / _PAGE_GCS_RO instead.

Sorry for the probably stupid questions, still learning all these arch 
details :)

-- 
Cheers,

David
