[PATCH v6 2/2] arm64: implement support for static call trampolines
Peter Zijlstra
peterz at infradead.org
Mon Nov 8 03:52:31 PST 2021
On Mon, Nov 08, 2021 at 12:29:04PM +0100, Ard Biesheuvel wrote:
> On Mon, 8 Nov 2021 at 11:23, Peter Zijlstra <peterz at infradead.org> wrote:
> > > +static void *strip_cfi_jt(void *addr)
> > > +{
> > > +	if (IS_ENABLED(CONFIG_CFI_CLANG)) {
> > > +		void *p = addr;
> > > +		u32 insn;
> > > +
> > > +		/*
> > > +		 * Taking the address of a function produces the address of the
> > > +		 * jump table entry when Clang CFI is enabled. Such entries are
> > > +		 * ordinary jump instructions, preceded by a BTI C instruction
> > > +		 * if BTI is enabled for the kernel.
> > > +		 */
> > > +		if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
> > > +			p += 4;
> >
> > Perhaps:
> > 	if (aarch64_insn_is_bti(le32_to_cpup(p)))
>
> That instruction does not exist yet, and it raises the question of
> which type of BTI instruction we want to detect.
Yeah, I actually checked, but I figured the intent was clear enough. As
for which type: all of them?
> > 		p += 4;
> >
> > Perhapser still, add:
> > 	else
> > 		WARN_ON(IS_ENABLED(CONFIG_ARM64_BTI_KERNEL));
> >
>
> There's already a WARN() below that will trigger and return the
> original address if the entry did not have the expected layout, which
> means a direct branch at offset 0x0 or 0x4 depending on whether BTI is
> on.
>
> So I could add a WARN() here as well, but I'd prefer to keep the one
> at the bottom, which makes the one here slightly redundant.
Sure, that works. The slightly more paranoid me would tell you that the
code as is might match something you didn't want it to.
E.g. without the extra WARN, you could accidentally match a B instruction
without BTI on a BTI kernel build. Or your initial version could even
match:

	RET;
	B ponies;

on a BTI kernel.
My point being that since we're not exactly sure what a future compiler
will generate for us here, we'd best be maximally paranoid about what
we're willing to accept.
> > > +
> > > +		insn = le32_to_cpup(p);
> > > +		if (aarch64_insn_is_b(insn))
> > > +			return p + aarch64_get_branch_offset(insn);
> > > +
> > > +		WARN_ON(1);
> > > +	}
> > > +	return addr;
> > > +}
> >
> > Also, can this please have a comment decrying the lack of built-in for
> > this?
>
> Sure.
Which ties in with that. Once it's a built-in, we can be sure the
compiler knows what it needs to do to undo its own magic.