[v4 PATCH] arm64: mm: force write fault for atomic RMW instructions

Yang Shi yang at os.amperecomputing.com
Wed Jun 26 13:50:42 PDT 2024



On 6/26/24 11:45 AM, Yang Shi wrote:
>
>
> On 6/14/24 5:20 AM, Catalin Marinas wrote:
>> On Wed, Jun 05, 2024 at 01:37:23PM -0700, Yang Shi wrote:
>>> +static __always_inline bool aarch64_insn_is_class_cas(u32 insn)
>>> +{
>>> +    return aarch64_insn_is_cas(insn) ||
>>> +           aarch64_insn_is_casp(insn);
>>> +}
>>> +
>>> +/*
>>> + * Exclude unallocated atomic instructions and LD64B/LDAPR.
>>> + * The masks and values were generated by using Python sympy module.
>>> + */
>>> +static __always_inline bool aarch64_atomic_insn_has_wr_perm(u32 insn)
>>> +{
>>> +    return ((insn & 0x3f207c00) == 0x38200000) ||
>>> +           ((insn & 0x3f208c00) == 0x38200000) ||
>>> +           ((insn & 0x7fe06c00) == 0x78202000) ||
>>> +           ((insn & 0xbf204c00) == 0x38200000);
>>> +}
>> This is still pretty opaque if we want to modify it in the future. I
>> guess we could add more tests on top but it would be nice to have a way
>> to re-generate these masks. I'll think about it; for now these tests
>> would do.
>
> Sorry for the late reply; I just came back from vacation and am trying
> to catch up on all the emails and TODOs. We should be able to share the
> tool we used to generate the tests, but it may take some time.

D Scott has made the tool publicly available; please refer to 
https://gitlab.com/scott-ph/arm64-insn-group-minimizer

We can re-generate the tests with this tool in the future.
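
In the meantime, the current mask/value pairs can be sanity-checked in
user space. Below is a minimal sketch (not part of the patch); the
hard-coded encodings for ldadd, swp and ldapr were derived by hand from
the Arm ARM, so please double-check them against an assembler before
relying on them:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Same mask/value pairs as aarch64_atomic_insn_has_wr_perm() above */
static bool atomic_insn_has_wr_perm(uint32_t insn)
{
	return ((insn & 0x3f207c00) == 0x38200000) ||
	       ((insn & 0x3f208c00) == 0x38200000) ||
	       ((insn & 0x7fe06c00) == 0x78202000) ||
	       ((insn & 0xbf204c00) == 0x38200000);
}

int main(void)
{
	/* ldadd w0, w1, [x2] -- atomic RMW, expect 1 */
	printf("ldadd: %d\n", atomic_insn_has_wr_perm(0xb8200041));
	/* swp w0, w1, [x2] -- atomic RMW, expect 1 */
	printf("swp:   %d\n", atomic_insn_has_wr_perm(0xb8208041));
	/* ldapr w0, [x1] -- load-only, expect 0 */
	printf("ldapr: %d\n", atomic_insn_has_wr_perm(0xb8bfc020));
	return 0;
}

Note that CAS/CASP are deliberately not covered by these masks; the patch
matches them separately via aarch64_insn_is_class_cas().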

>
>>
>>> @@ -511,6 +539,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>>       unsigned long addr = untagged_addr(far);
>>>       struct vm_area_struct *vma;
>>>       int si_code;
>>> +    bool may_force_write = false;
>>>
>>>       if (kprobe_page_fault(regs, esr))
>>>           return 0;
>>> @@ -547,6 +576,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>>           /* If EPAN is absent then exec implies read */
>>>           if (!alternative_has_cap_unlikely(ARM64_HAS_EPAN))
>>>               vm_flags |= VM_EXEC;
>>> +        may_force_write = true;
>>>       }
>>>
>>>       if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) {
>>> @@ -568,6 +598,12 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>>       if (!vma)
>>>           goto lock_mmap;
>>>
>>> +    if (may_force_write && (vma->vm_flags & VM_WRITE) &&
>>> +        is_el0_atomic_instr(regs)) {
>>> +        vm_flags = VM_WRITE;
>>> +        mm_flags |= FAULT_FLAG_WRITE;
>>> +    }
>> I think we can get rid of may_force_write and just test (vm_flags &
>> VM_READ).
>
> Yes, will fix it in v5.
>>
>
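
For the archives, a rough sketch of the simplified check for v5 (my
reading of Catalin's suggestion, not the actual v5 patch): vm_flags has
VM_READ set only on the read-fault path, so testing that bit can replace
the separate may_force_write flag:

	if ((vm_flags & VM_READ) && (vma->vm_flags & VM_WRITE) &&
	    is_el0_atomic_instr(regs)) {
		/* Force the write fault path for the atomic RMW */
		vm_flags = VM_WRITE;
		mm_flags |= FAULT_FLAG_WRITE;
	}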
