[v2 PATCH] arm64: mm: force write fault for atomic RMW instructions

Catalin Marinas catalin.marinas at arm.com
Thu May 23 12:00:05 PDT 2024


On Thu, May 23, 2024 at 11:09:11AM -0700, Christoph Lameter (Ampere) wrote:
> On Thu, 23 May 2024, Catalin Marinas wrote:
> 
> > On Mon, May 20, 2024 at 09:56:36AM -0700, Yang Shi wrote:
> > > diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> > > index db1aeacd4cd9..1cc73664fc55 100644
> > > --- a/arch/arm64/include/asm/insn.h
> > > +++ b/arch/arm64/include/asm/insn.h
> > > @@ -319,6 +319,7 @@ static __always_inline u32 aarch64_insn_get_##abbr##_value(void)	\
> > >   * "-" means "don't care"
> > >   */
> > >  __AARCH64_INSN_FUNCS(class_branch_sys,	0x1c000000, 0x14000000)
> > > +__AARCH64_INSN_FUNCS(class_atomic,	0x3b200c00, 0x38200000)
> > 
> > While this class includes all atomics that currently require write
> > permission, there's some unallocated space in this range and we don't
> > know what future architecture versions may introduce. Unfortunately we
> > need to check each individual atomic op in this class (not sure what the
> > overhead will be).
> 
> Can you tell us which bits or patterns are not allocated? Maybe we can
> exclude those from the pattern.

Yes, it may be easier to exclude those patterns. See the Arm ARM K.a
section C4.1.94.29 (page 791).
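
A rough sketch of what that could look like, reusing the
__AARCH64_INSN_FUNCS pattern from the hunk above (the ldatomic/swp names
and the mask/value pairs here are only illustrative and would need to be
checked against the encoding tables in that section; CAS/CASP sit in a
different encoding class and are not covered):

/*
 * Illustrative only: match the allocated RMW encodings rather than the
 * whole class. The LD<op> family has o3 (bit 15) == 0 and SWP has
 * o3 == 1, opc (bits [14:12]) == 0b000, so LDAPR and the remaining
 * unallocated/future o3 == 1 space are left out.
 */
__AARCH64_INSN_FUNCS(ldatomic,	0x3b208c00, 0x38200000)
__AARCH64_INSN_FUNCS(swp,	0x3b20fc00, 0x38208000)

static __always_inline bool aarch64_insn_is_atomic_rmw(u32 insn)
{
	return aarch64_insn_is_ldatomic(insn) ||
	       aarch64_insn_is_swp(insn);
}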

-- 
Catalin


