[PATCH 0/4] riscv: uaccess: optimizations

Mark Rutland <mark.rutland@arm.com>
Mon Jul 8 08:30:52 PDT 2024


On Mon, Jul 08, 2024 at 02:52:43PM +0100, Will Deacon wrote:
> On Fri, Jul 05, 2024 at 10:58:29AM -0700, Linus Torvalds wrote:
> > On Fri, 5 Jul 2024 at 04:25, Will Deacon <will at kernel.org> wrote:
> > So on x86-64, the simple solution is to just say "we know if the top
> > bit is clear, it cannot ever touch kernel code, and if the top bit is
> > set we have to make the address fault". So just duplicating the top
> > bit (with an arithmetic shift) and or'ing it with the low bits, we get
> > exactly what we want.
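
A minimal C sketch of the sign-bit trick described above (the idea,
not the actual x86 code):

  static inline unsigned long mask_user_ptr(unsigned long ptr)
  {
          /* (long)ptr >> 63 is 0 for user pointers and ~0UL for
           * kernel pointers, so the OR leaves user pointers
           * unchanged and turns any kernel pointer into the
           * all-ones address, which always faults on x86-64. */
          return ptr | ((long)ptr >> 63);
  }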
> > 
> > But my knowledge of arm64 is weak enough that, while I can read
> > the assembly and I know it's bit 55 instead of the top bit, I
> > don't know what the actual rules for the translation table
> > registers are.
> > 
> > If the all-bits-set address is guaranteed to always trap, then arm64
> > could just use the same thing x86 does (just duplicating bit 55
> > instead of the sign bit)?
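
If the all-bits-set address does always fault, the bit-55 version of
the same trick would be just as cheap; as a sketch (hypothetical, not
actual kernel code):

  static inline unsigned long mask_user_ptr(unsigned long ptr)
  {
          /* Shift bit 55 up to bit 63, sign-extend it across the
           * register, and OR it in: user pointers are left
           * untouched, anything with bit 55 set becomes the
           * all-ones address. */
          return ptr | ((long)(ptr << 8) >> 63);
  }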
> 
> Perhaps we could just force accesses with bit 55 set to the address
> '1UL << 55'? That should sit slap bang in the middle of the guard
> region between the TTBRs

Yep, that'll work until we handle FEAT_D128 where (1UL << 55) will be
the start of the TTBR1 range in some configurations.
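
For reference, the masking Will suggests could look something like
this (a rough sketch only, ignoring pointer tags and the FEAT_D128
caveat above):

  static inline unsigned long mask_user_ptr(unsigned long ptr)
  {
          /* All-ones if bit 55 is set, zero otherwise. */
          unsigned long mask = (long)(ptr << 8) >> 63;

          /* Squash any bit-55-set address to 1UL << 55, in the
           * middle of the guard region between the TTBR0 and
           * TTBR1 ranges. */
          return (ptr & ~mask) | (mask & (1UL << 55));
  }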

> and I think it would resolve any issues we may have with wrapping. It
> still means effectively reverting 2305b809be93 ("arm64: uaccess:
> simplify uaccess_mask_ptr()"), though.

If we do bring that back, it'd be nice to do so without the
CSEL+CSDB, as the CSDB is liable to be expensive on some parts (e.g.
it's an alias of DSB on older designs).
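
For context, the code that commit 2305b809be93 removed was roughly of
this shape (simplified from the old __uaccess_mask_ptr(), from
memory, not the verbatim code):

  void __user *safe_ptr;

  asm volatile(
  /* Set flags on ptr & ~(TASK_SIZE_MAX - 1): Z is set iff the
   * pointer lies within the user range. */
  "	bics	xzr, %1, %2\n"
  /* Select the pointer if in range, NULL otherwise. */
  "	csel	%0, %1, xzr, eq\n"
  /* Keep the unselected value from being used speculatively;
   * this is the potentially expensive instruction. */
  "	csdb\n"
  : "=&r" (safe_ptr)
  : "r" (ptr), "r" (TASK_SIZE_MAX - 1)
  : "cc");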

> Dunno. Mark, Catalin, what do you guys reckon?

I think there's a slight variant of the x86 approach that might work;
I've described it in my reply at

  https://lore.kernel.org/lkml/ZowD3LQT_KTz2g4X@J2N7QTR9R3/

Mark.


