[v4 PATCH] arm64: mm: force write fault for atomic RMW instructions
Catalin Marinas
catalin.marinas at arm.com
Fri Jun 14 05:20:30 PDT 2024
On Wed, Jun 05, 2024 at 01:37:23PM -0700, Yang Shi wrote:
> +static __always_inline bool aarch64_insn_is_class_cas(u32 insn)
> +{
> +	return aarch64_insn_is_cas(insn) ||
> +	       aarch64_insn_is_casp(insn);
> +}
> +
> +/*
> + * Exclude unallocated atomic instructions and LD64B/LDAPR.
> + * The masks and values were generated using the Python sympy module.
> + */
> +static __always_inline bool aarch64_atomic_insn_has_wr_perm(u32 insn)
> +{
> +	return ((insn & 0x3f207c00) == 0x38200000) ||
> +	       ((insn & 0x3f208c00) == 0x38200000) ||
> +	       ((insn & 0x7fe06c00) == 0x78202000) ||
> +	       ((insn & 0xbf204c00) == 0x38200000);
> +}
This is still pretty opaque if we want to modify it in the future. I
guess we could add more tests on top, but it would be nice to have a
way to re-generate these masks. I'll think about it; for now these
tests will do.
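
As a sanity check on top, something like the untested sketch below
would at least catch obvious regressions. The encodings are
hand-assembled from the Arm ARM, so double-check them before relying
on this:

	/*
	 * Untested sketch of a self-test for the masks above. The
	 * encodings are hand-assembled and only illustrative:
	 * LDADD W1, W2, [X0] (0xb8210002) is an atomic RMW and must
	 * match; LDAPR W0, [X1] (0xb8bfc020) is load-only and must
	 * not.
	 */
	static int __init atomic_wr_perm_selftest(void)
	{
		WARN_ON(!aarch64_atomic_insn_has_wr_perm(0xb8210002));
		WARN_ON(aarch64_atomic_insn_has_wr_perm(0xb8bfc020));
		return 0;
	}
	late_initcall(atomic_wr_perm_selftest);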
> @@ -511,6 +539,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>  	unsigned long addr = untagged_addr(far);
>  	struct vm_area_struct *vma;
>  	int si_code;
> +	bool may_force_write = false;
>  
>  	if (kprobe_page_fault(regs, esr))
>  		return 0;
> @@ -547,6 +576,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>  		/* If EPAN is absent then exec implies read */
>  		if (!alternative_has_cap_unlikely(ARM64_HAS_EPAN))
>  			vm_flags |= VM_EXEC;
> +		may_force_write = true;
>  	}
>  
>  	if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) {
> @@ -568,6 +598,12 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>  	if (!vma)
>  		goto lock_mmap;
>  
> +	if (may_force_write && (vma->vm_flags & VM_WRITE) &&
> +	    is_el0_atomic_instr(regs)) {
> +		vm_flags = VM_WRITE;
> +		mm_flags |= FAULT_FLAG_WRITE;
> +	}
I think we can get rid of may_force_write and just test (vm_flags &
VM_READ).
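VM_READ only ends up in vm_flags on the read fault path, so the new
flag is redundant. Something like this (untested) should be
equivalent:

	if ((vm_flags & VM_READ) && (vma->vm_flags & VM_WRITE) &&
	    is_el0_atomic_instr(regs)) {
		vm_flags = VM_WRITE;
		mm_flags |= FAULT_FLAG_WRITE;
	}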
--
Catalin