[PATCH 4/4] arm64: ftrace: Add return address protection

Mark Rutland mark.rutland at arm.com
Wed Nov 30 06:04:07 PST 2022


Hi Ard,

On Tue, Nov 29, 2022 at 03:18:03PM +0100, Ard Biesheuvel wrote:
> Use the newly added asm macros to protect and restore the return address
> in the ftrace call wrappers, based on whichever method is active (PAC
> and/or shadow call stack).
> 
> If the graph tracer is in use, this covers both the return address *to*
> the ftrace call site as well as the return address *at* the call site,
> and the latter will either be restored in return_to_handler(), or before
> returning to the call site.
> 
> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> ---
>  arch/arm64/kernel/entry-ftrace.S | 28 +++++++++++++++++++-
>  1 file changed, 27 insertions(+), 1 deletion(-)

As a heads-up, this code has recently changed quite significantly, and this
won't apply to the version queued in arm64's for-next/{ftrace,core} branches.

I had a direction of travel in mind with some changes for better stacktracing,
which won't work with the approach here, so I'd prefer we do this a bit
differently; more on that below.

> diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
> index 795344ab4ec45889..c744e4dd8c90a352 100644
> --- a/arch/arm64/kernel/entry-ftrace.S
> +++ b/arch/arm64/kernel/entry-ftrace.S
> @@ -35,6 +35,11 @@
>   * is missing from the LR and existing chain of frame records.
>   */
>  	.macro  ftrace_regs_entry, allregs=0
> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
> +	protect_return_address x9
> +#endif
> +	protect_return_address x30

I think if we're going to protect the callsite's original LR (x9 here), we
should do that regardless of CONFIG_FUNCTION_GRAPH_TRACER; what matters is
whether that value is vulnerable, not whether we intend to modify it, so I
don't think it makes sense to protect it conditionally based on
CONFIG_FUNCTION_GRAPH_TRACER.
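
Something like the below is what I had in mind (sketch only, untested),
using the macros added earlier in this series:

	.macro  ftrace_regs_entry, allregs=0
	/*
	 * Protect both return addresses unconditionally: the callsite's
	 * original LR (x9) and the return address back into the callsite
	 * (x30) both get spilled to memory below, so both are equally
	 * vulnerable while we're in the trampoline.
	 */
	protect_return_address x9
	protect_return_address x30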

I'm a bit worried this might confuse some of the ftrace code which
manipulates the return address (e.g. the handling of the ftrace graph
return stack), as I don't think all of that is PAC-clean, and it might
need some modification.
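
For example, once the saved LR is signed, anything which wants to inspect
or compare it first needs to strip the PAC, something like (sketch only):

	ldr	x10, [sp, #S_LR]	// signed copy of the callsite LR
	xpaci	x10			// strip the PAC before comparison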

> +
>  	/* Make room for pt_regs, plus a callee frame */
>  	sub	sp, sp, #(PT_REGS_SIZE + 16)
>  
> @@ -89,7 +94,9 @@ SYM_CODE_START(ftrace_caller)
>  	b	ftrace_common
>  SYM_CODE_END(ftrace_caller)
>  
> -SYM_CODE_START(ftrace_common)
> +SYM_CODE_START_LOCAL(ftrace_common)
> +	alternative_insn  nop, "xpaci x30", ARM64_HAS_ADDRESS_AUTH, IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL)
> +
>  	sub	x0, x30, #AARCH64_INSN_SIZE	// ip (callsite's BL insn)
>  	mov	x1, x9				// parent_ip (callsite's LR)
>  	ldr_l	x2, function_trace_op		// op
> @@ -115,9 +122,27 @@ SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
>  	ldr	x30, [sp, #S_LR]
>  	ldr	x9, [sp, #S_PC]
>  
> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
> +	/* grab the original return address from the stack */
> +	ldr	x10, [sp, #PT_REGS_SIZE + 8]
> +#endif

I'm planning to teach the stack unwinder how to unwind through ftrace_regs,
such that we wouldn't need to duplicate the LR in a frame record here, and so
we'd *only* have the copy inside the struct ftrace_regs.

I don't think we need the copy here at all if we sign the callsite's LR
against the base of the struct ftrace_regs. That way ftrace_graph_func() can
sign the updated return address itself, and this code wouldn't need to care.
The ftrace_regs also have a copy of x18 that we can use to manipulate the
SCS.
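
As a rough sketch of the signing scheme I mean (untested; assumes SP points
at the base of the struct ftrace_regs while it is live, and that a failed
authentication yields a faulting address, or traps with FEAT_FPAC):

	/* entry: sign the callsite's LR against the ftrace_regs base */
	pacia	x30, sp			// SP == base of struct ftrace_regs
	str	x30, [sp, #S_LR]

	/* exit: authenticate against the same modifier */
	ldr	x30, [sp, #S_LR]
	autia	x30, sp			// bad PAC => faulting address

If ftrace_graph_func() installs return_to_handler, it would re-sign the new
value against the same base, so the exit path wouldn't need to care.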

> +
>  	/* Restore the callsite's SP */
>  	add	sp, sp, #PT_REGS_SIZE + 16
>  
> +	restore_return_address x9
> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
> +	/* compare the original return address with the actual one */
> +	cmp	x10, x30
> +	b.ne	0f
> +
> +	/*
> +	 * If they are the same, unprotect it now. If it was modified, it will
> +	 * be dealt with in return_to_handler() below.
> +	 */
> +	restore_return_address x30
> +0:
> +#endif
>  	ret	x9

This means if the return address is clobbered, we'll blindly trust it without
authentication, which IMO undermines the point of signing it in the first
place.

As above, I'd prefer that we had ftrace_graph_func() fix things up so that we
can unconditionally authenticate things here, which would be a bit stronger and
simpler to reason about.
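
IOW (again just a sketch, modulo whichever modifier we settle on), the tail
of ftrace_common could become:

	/* Restore the callsite's SP */
	add	sp, sp, #PT_REGS_SIZE + 16

	restore_return_address x9
	restore_return_address x30	// always authenticate; no compare
	ret	x9

... with ftrace_graph_func() responsible for re-signing the return address
whenever it installs return_to_handler.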

Thanks,
Mark.

>  SYM_CODE_END(ftrace_common)
>  
> @@ -329,6 +354,7 @@ SYM_CODE_START(return_to_handler)
>  	ldp x6, x7, [sp, #48]
>  	add sp, sp, #64
>  
> +	restore_return_address x30
>  	ret
>  SYM_CODE_END(return_to_handler)
>  #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
> -- 
> 2.35.1
> 


