[PATCH v4 13/29] arm64: convert protection key into vm_flags and pgprot values
Dave Martin
Dave.Martin at arm.com
Thu Jul 25 08:49:50 PDT 2024
On Fri, May 03, 2024 at 02:01:31PM +0100, Joey Gouly wrote:
> Modify arch_calc_vm_prot_bits() and vm_get_page_prot() such that the pkey
> value is set in the vm_flags and then into the pgprot value.
>
> Signed-off-by: Joey Gouly <joey.gouly at arm.com>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: Will Deacon <will at kernel.org>
> ---
> arch/arm64/include/asm/mman.h | 8 +++++++-
> arch/arm64/mm/mmap.c | 9 +++++++++
> 2 files changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
> index 5966ee4a6154..ecb2d18dc4d7 100644
> --- a/arch/arm64/include/asm/mman.h
> +++ b/arch/arm64/include/asm/mman.h
> @@ -7,7 +7,7 @@
> #include <uapi/asm/mman.h>
>
> static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
> - unsigned long pkey __always_unused)
> + unsigned long pkey)
> {
> unsigned long ret = 0;
>
> @@ -17,6 +17,12 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
> if (system_supports_mte() && (prot & PROT_MTE))
> ret |= VM_MTE;
>
> +#if defined(CONFIG_ARCH_HAS_PKEYS)
> + ret |= pkey & 0x1 ? VM_PKEY_BIT0 : 0;
> + ret |= pkey & 0x2 ? VM_PKEY_BIT1 : 0;
> + ret |= pkey & 0x4 ? VM_PKEY_BIT2 : 0;
Out of interest, is this as bad as it looks, or does the compiler turn
it into a shift and mask?
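Something like the following ought to be equivalent, I think, assuming the
three VM_PKEY_BITn flags are contiguous (as I believe they are in
include/linux/mm.h, where they map to VM_HIGH_ARCH_0..2). A minimal
userspace sketch with stand-in flag values, not the kernel's real bit
positions:

#include <stdio.h>

/* Stand-in values for illustration; the kernel's VM_PKEY_BIT0..2 are the
 * VM_HIGH_ARCH_* flags and sit much higher up in vm_flags. */
#define VM_PKEY_BIT0	0x10UL
#define VM_PKEY_BIT1	0x20UL
#define VM_PKEY_BIT2	0x40UL

/* Per-bit form, as in the patch. */
static unsigned long pkey_cond(unsigned long pkey)
{
	unsigned long ret = 0;

	ret |= pkey & 0x1 ? VM_PKEY_BIT0 : 0;
	ret |= pkey & 0x2 ? VM_PKEY_BIT1 : 0;
	ret |= pkey & 0x4 ? VM_PKEY_BIT2 : 0;
	return ret;
}

/* Shift-and-mask form; only valid because the three flags are contiguous. */
static unsigned long pkey_shift(unsigned long pkey)
{
	return (pkey & 0x7) * VM_PKEY_BIT0;	/* == (pkey & 7) << 4 here */
}

int main(void)
{
	for (unsigned long pkey = 0; pkey < 8; pkey++)
		printf("%lu: %#lx %#lx\n", pkey,
		       pkey_cond(pkey), pkey_shift(pkey));
	return 0;
}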
> +#endif
> +
> return ret;
> }
> #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
> diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
> index 642bdf908b22..86eda6bc7893 100644
> --- a/arch/arm64/mm/mmap.c
> +++ b/arch/arm64/mm/mmap.c
> @@ -102,6 +102,15 @@ pgprot_t vm_get_page_prot(unsigned long vm_flags)
> if (vm_flags & VM_MTE)
> prot |= PTE_ATTRINDX(MT_NORMAL_TAGGED);
>
> +#ifdef CONFIG_ARCH_HAS_PKEYS
> + if (vm_flags & VM_PKEY_BIT0)
> + prot |= PTE_PO_IDX_0;
> + if (vm_flags & VM_PKEY_BIT1)
> + prot |= PTE_PO_IDX_1;
> + if (vm_flags & VM_PKEY_BIT2)
> + prot |= PTE_PO_IDX_2;
> +#endif
> +
Ditto. At least we only have three bits to cope with either way.
I'm guessing that these functions are not on a super-hot path.
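Likewise on the pgprot side: if PTE_PO_IDX_0..2 are a contiguous run of PTE
bits (which I believe is the case for the POIndex field), the whole block
could in principle collapse into one divide/multiply that the compiler turns
into shifts. Another illustrative sketch with made-up bit positions, not the
real PTE layout:

#include <stdio.h>

/* Illustrative stand-ins; the real PTE_PO_IDX_* bits live in the upper
 * attribute bits of the PTE, and VM_PKEY_BIT* in the high vm_flags. */
#define VM_PKEY_BIT0	0x10UL
#define VM_PKEY_MASK	(0x7UL * VM_PKEY_BIT0)
#define PTE_PO_IDX_0	0x100UL

/* Move the 3-bit pkey field from vm_flags into the (fake) PO-index field;
 * the divide and multiply reduce to shifts when both runs are contiguous. */
static unsigned long pkey_prot(unsigned long vm_flags)
{
	return ((vm_flags & VM_PKEY_MASK) / VM_PKEY_BIT0) * PTE_PO_IDX_0;
}

int main(void)
{
	for (unsigned long pkey = 0; pkey < 8; pkey++)
		printf("pkey %lu -> prot %#lx\n", pkey,
		       pkey_prot(pkey * VM_PKEY_BIT0));
	return 0;
}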
[...]
Cheers
---Dave