[v2 PATCH] arm64: mm: force write fault for atomic RMW instructions

Christoph Lameter (Ampere) cl at gentwo.org
Thu May 23 11:09:11 PDT 2024


On Thu, 23 May 2024, Catalin Marinas wrote:

> On Mon, May 20, 2024 at 09:56:36AM -0700, Yang Shi wrote:
>> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
>> index db1aeacd4cd9..1cc73664fc55 100644
>> --- a/arch/arm64/include/asm/insn.h
>> +++ b/arch/arm64/include/asm/insn.h
>> @@ -319,6 +319,7 @@ static __always_inline u32 aarch64_insn_get_##abbr##_value(void)	\
>>   * "-" means "don't care"
>>   */
>>  __AARCH64_INSN_FUNCS(class_branch_sys,	0x1c000000, 0x14000000)
>> +__AARCH64_INSN_FUNCS(class_atomic,	0x3b200c00, 0x38200000)
>
> While this class includes all atomics that currently require write
> permission, there's some unallocated space in this range and we don't
> know what future architecture versions may introduce. Unfortunately we
> need to check each individual atomic op in this class (not sure what the
> overhead will be).

Can you tell us which bits or patterns are not allocated? Maybe we can
exclude those from the mask.
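
For reference, the mask/value pair in the quoted hunk feeds the
__AARCH64_INSN_FUNCS() machinery in insn.h, which generates a predicate of
the form (insn & mask) == value. Below is a minimal user-space sketch of
the difference between the whole-class check in the patch and the
per-instruction checks Catalin suggests. The class_atomic mask/value are
the ones from the hunk above; the per-op table entries are illustrative
placeholders only, not real A64 encodings.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Core predicate: an instruction word matches when (insn & mask) == val. */
static inline bool insn_matches(uint32_t insn, uint32_t mask, uint32_t val)
{
	return (insn & mask) == val;
}

/* Whole-class check, mirroring the class_atomic entry in the patch. */
static inline bool insn_is_class_atomic(uint32_t insn)
{
	return insn_matches(insn, 0x3b200c00, 0x38200000);
}

/*
 * Per-instruction alternative: a table of narrower mask/value pairs,
 * one per known atomic op, so that unallocated encodings inside the
 * class range are not treated as atomics. Entries here are placeholders.
 */
struct insn_pattern {
	uint32_t mask;
	uint32_t val;
};

static bool insn_is_known_atomic(uint32_t insn,
				 const struct insn_pattern *tbl, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (insn_matches(insn, tbl[i].mask, tbl[i].val))
			return true;
	return false;
}

The trade-off Catalin raises is visible here: the class check is a single
mask compare, while the per-op variant is a walk over however many atomic
encodings need to be recognised, which is where the extra overhead in the
fault path would come from.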



