[PATCH 01/13] arm64: lib: __arch_clear_user(): fold fixups into body
Robin Murphy
robin.murphy at arm.com
Wed Oct 13 12:55:31 PDT 2021
On 2021-10-13 12:00, Mark Rutland wrote:
> Like other functions, __arch_clear_user() places its exception fixups in
> the `.fixup` section without any clear association with
> __arch_clear_user() itself. If we backtrace the fixup code, it will be
> symbolized as an offset from the nearest prior symbol, which happens to
> be `__entry_tramp_text_end`. Further, since the PC adjustment for the
> fixup is akin to a direct branch rather than a function call,
> __arch_clear_user() itself will be missing from the backtrace.
>
> This is confusing and hinders debugging. In general this pattern will
> also be problematic for CONFIG_LIVEPATCH, since fixups often return to
> their associated function, but this isn't accurately captured in the
> stacktrace.
>
> To solve these issues for assembly functions, we must move fixups into
> the body of the functions themselves, after the usual fast-path returns.
> This patch does so for __arch_clear_user().
>
> Inline assembly will be dealt with in subsequent patches.
>
> Other than the improved backtracing, there should be no functional
> change as a result of this patch.
Oh, I always assumed the .fixup section might have some special
significance of its own. If not, all the better - modulo one possible
nit below, this is fine by me. Also it's led me to see that
copy_in_user() has finally left us, hooray! Guess I've got no more
excuses to put off that promised usercopy rewrite other than finding the
time now...
> Signed-off-by: Mark Rutland <mark.rutland at arm.com>
> Cc: Ard Biesheuvel <ardb at kernel.org>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: James Morse <james.morse at arm.com>
> Cc: Mark Brown <broonie at kernel.org>
> Cc: Robin Murphy <robin.murphy at arm.com>
> Cc: Will Deacon <will at kernel.org>
> ---
> arch/arm64/lib/clear_user.S | 7 ++-----
> 1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
> index a7efb2ad2a1c..dac13df4a1ed 100644
> --- a/arch/arm64/lib/clear_user.S
> +++ b/arch/arm64/lib/clear_user.S
> @@ -45,13 +45,10 @@ USER(9f, sttrh wzr, [x0])
> USER(7f, sttrb wzr, [x2, #-1])
> 5: mov x0, #0
> ret
> -SYM_FUNC_END(__arch_clear_user)
> -EXPORT_SYMBOL(__arch_clear_user)
>
> - .section .fixup,"ax"
The one useful purpose this did serve is to provide a handy visual cue -
if you have any more concrete reason to respin the series, would you
mind sticking in a little comment like "// Exception fixup" here and in
the other two assembly routines, just so it's harder to overlook that
the preceding ret is the normal exit path?
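
Concretely, I mean something like this (sketch only - the code is just the fixup from your patch, the comment line is the only addition):

```
5:	mov	x0, #0
	ret

	// Exception fixup
7:	sub	x0, x2, #5	// Adjust for faulting on the final byte...
8:	add	x0, x0, #4	// ...or the second word of the 4-7 byte case
9:	sub	x0, x2, x0
	ret
SYM_FUNC_END(__arch_clear_user)
```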
Either way, for patches 1-3,
Acked-by: Robin Murphy <robin.murphy at arm.com>
I got totally lost trying to follow the _ASM_EXTABLE_UACCESS_* business,
but I don't think the rest of the series gets in the way of any
outstanding plans either (and FWIW I always thought "fixup->fixup" was
pretty awful so feel free to have an ack for patch #9 as well). Indeed,
I guess the new type field should mean that we can implement "proper"
address-aware fault handlers without the Itanium trick if we still need to.
Cheers,
Robin
> - .align 2
> 7: sub x0, x2, #5 // Adjust for faulting on the final byte...
> 8: add x0, x0, #4 // ...or the second word of the 4-7 byte case
> 9: sub x0, x2, x0
> ret
> - .previous
> +SYM_FUNC_END(__arch_clear_user)
> +EXPORT_SYMBOL(__arch_clear_user)
>