[v5 PATCH] arm64: mm: force write fault for atomic RMW instructions

Catalin Marinas catalin.marinas at arm.com
Wed Jul 10 02:22:28 PDT 2024


On Tue, Jul 09, 2024 at 03:29:58PM -0700, Yang Shi wrote:
> On 7/9/24 11:35 AM, Catalin Marinas wrote:
> > On Tue, Jul 09, 2024 at 10:56:55AM -0700, Yang Shi wrote:
> > > On 7/4/24 3:03 AM, Catalin Marinas wrote:
> > > I tested exec-only on QEMU tcg, but I don't have a hardware supported EPAN.
> > > I don't think performance benchmark on QEMU tcg makes sense since it is
> > > quite slow, such small overhead is unlikely measurable on it.
> > 
> > Yeah, benchmarking under qemu is pointless. I think you can remove some
> > of the ARM64_HAS_EPAN checks (or replace them with ARM64_HAS_PAN) just
> > for testing. For security reasons, we removed this behaviour in commit
> > 24cecc377463 ("arm64: Revert support for execute-only user mappings"),
> > but it's good enough for testing. This should give you PROT_EXEC-only
> > mappings on your hardware.
> 
> Thanks for the suggestion. IIUC, I can still emulate exec-only even though
> the hardware doesn't support EPAN? So it means reading an exec-only area
> from the kernel can still trigger a fault, right?

Yes, exec-only user mappings have been supported by the architecture
since ARMv8.0. We limited the kernel support to EPAN because, without
it, a PROT_EXEC-only mapping still allowed the kernel to access the
memory even if PSTATE.PAN was set.
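
FWIW, a quick userspace sanity check would be something like the sketch
below (my own untested example, not part of the patch): mmap() a page
PROT_EXEC-only and attempt a data read. With your hack applied the read
should fault and deliver SIGSEGV; with mainline's PAGE_READONLY_EXEC the
read succeeds and returns zero.

/* Untested sketch: check whether a PROT_EXEC-only mapping faults on read */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void segv_handler(int sig)
{
	/* Only reached if the data read faulted, i.e. exec-only is honoured */
	static const char msg[] = "read faulted: exec-only in effect\n";

	(void)sig;
	write(STDOUT_FILENO, msg, sizeof(msg) - 1);
	_exit(0);
}

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned char *p;

	signal(SIGSEGV, segv_handler);

	/* Anonymous mapping requested with execute-only permissions */
	p = mmap(NULL, page, PROT_EXEC, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* A data read; this should fault if the pte really is execute-only */
	printf("read did not fault, first byte: %u\n", (unsigned int)*p);
	return 0;
}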

> And 24cecc377463 ("arm64: Revert support for execute-only user mappings")
> can't be reverted cleanly by git revert, so I did it manually as below.

Yeah, I wasn't expecting that to work.

> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 6a8b71917e3b..0bdedd415e56 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -573,8 +573,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>                 /* Write implies read */
>                 vm_flags |= VM_WRITE;
>                 /* If EPAN is absent then exec implies read */
> -               if (!alternative_has_cap_unlikely(ARM64_HAS_EPAN))
> -                       vm_flags |= VM_EXEC;
> +               //if (!alternative_has_cap_unlikely(ARM64_HAS_EPAN))
> +               //      vm_flags |= VM_EXEC;
>         }
> 
>         if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) {
> diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
> index 642bdf908b22..d30265d424e4 100644
> --- a/arch/arm64/mm/mmap.c
> +++ b/arch/arm64/mm/mmap.c
> @@ -19,7 +19,7 @@ static pgprot_t protection_map[16] __ro_after_init = {
>         [VM_WRITE]                                      = PAGE_READONLY,
>         [VM_WRITE | VM_READ]                            = PAGE_READONLY,
>         /* PAGE_EXECONLY if Enhanced PAN */
> -       [VM_EXEC]                                       = PAGE_READONLY_EXEC,
> +       [VM_EXEC]                                       = PAGE_EXECONLY,
>         [VM_EXEC | VM_READ]                             = PAGE_READONLY_EXEC,
>         [VM_EXEC | VM_WRITE]                            = PAGE_READONLY_EXEC,
>         [VM_EXEC | VM_WRITE | VM_READ]                  = PAGE_READONLY_EXEC,

In theory you'd need to change the VM_SHARED | VM_EXEC entry as well.
Otherwise it looks fine.
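
I.e. something along these lines for that entry (untested; assuming the
shared half of protection_map mirrors the private half as in mainline):

        /* PAGE_EXECONLY if Enhanced PAN */
-       [VM_SHARED | VM_EXEC]                           = PAGE_READONLY_EXEC,
+       [VM_SHARED | VM_EXEC]                           = PAGE_EXECONLY,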

-- 
Catalin


