[PATCH v15 5/6] arm64: Create a list of SYM_CODE functions, check return PC against list

Madhavan T. Venkataraman madvenka at linux.microsoft.com
Sun Jun 26 22:06:49 PDT 2022



On 6/26/22 03:46, Mark Rutland wrote:
> On Fri, Jun 17, 2022 at 04:07:16PM -0500, madvenka at linux.microsoft.com wrote:
>> From: "Madhavan T. Venkataraman" <madvenka at linux.microsoft.com>
>>
>> SYM_CODE functions don't follow the usual calling conventions. Check if the
>> return PC in a stack frame falls in any of these. If it does, consider the
>> stack trace unreliable.
>>
>> Define a special section for unreliable functions
>> =================================================
>>
>> Define a SYM_CODE_END() macro for arm64 that adds the function address
>> range to a new section called "sym_code_functions".
>>
>> Linker file
>> ===========
>>
>> Include the "sym_code_functions" section under read-only data in
>> vmlinux.lds.S.
>>
>> Initialization
>> ==============
>>
>> Define an early_initcall() to create a sym_code_functions[] array from
>> the linker data.
>>
>> Unwinder check
>> ==============
>>
>> Add a reliability check in unwind_check_reliability() that compares a
>> return PC with sym_code_functions[]. If there is a match, then return
>> failure.
>>
>> Signed-off-by: Madhavan T. Venkataraman <madvenka at linux.microsoft.com>
>> Reviewed-by: Mark Brown <broonie at kernel.org>
>> ---
>>  arch/arm64/include/asm/linkage.h  | 11 +++++++
>>  arch/arm64/include/asm/sections.h |  1 +
>>  arch/arm64/kernel/stacktrace.c    | 55 +++++++++++++++++++++++++++++++
>>  arch/arm64/kernel/vmlinux.lds.S   | 10 ++++++
>>  4 files changed, 77 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
>> index 43f8c25b3fda..d4058de4af78 100644
>> --- a/arch/arm64/include/asm/linkage.h
>> +++ b/arch/arm64/include/asm/linkage.h
>> @@ -39,4 +39,15 @@
>>  	SYM_START(name, SYM_L_WEAK, SYM_A_NONE)		\
>>  	bti c ;
>>  
>> +/*
>> + * Record the address range of each SYM_CODE function in a struct code_range
>> + * in a special section.
>> + */
>> +#define SYM_CODE_END(name)				\
>> +	SYM_END(name, SYM_T_NONE)			;\
>> +99:	.pushsection "sym_code_functions", "aw"		;\
>> +	.quad	name					;\
>> +	.quad	99b					;\
>> +	.popsection
>> +
>>  #endif
>> diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
>> index 40971ac1303f..50cfd1083563 100644
>> --- a/arch/arm64/include/asm/sections.h
>> +++ b/arch/arm64/include/asm/sections.h
>> @@ -22,6 +22,7 @@ extern char __irqentry_text_start[], __irqentry_text_end[];
>>  extern char __mmuoff_data_start[], __mmuoff_data_end[];
>>  extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
>>  extern char __relocate_new_kernel_start[], __relocate_new_kernel_end[];
>> +extern char __sym_code_functions_start[], __sym_code_functions_end[];
>>  
>>  static inline size_t entry_tramp_text_size(void)
>>  {
>> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
>> index 5ef2ce217324..eda8581f7dbe 100644
>> --- a/arch/arm64/kernel/stacktrace.c
>> +++ b/arch/arm64/kernel/stacktrace.c
>> @@ -62,6 +62,31 @@ struct unwind_state {
>>  	bool reliable;
>>  };
>>  
>> +struct code_range {
>> +	unsigned long	start;
>> +	unsigned long	end;
>> +};
>> +
>> +static struct code_range	*sym_code_functions;
>> +static int			num_sym_code_functions;
>> +
>> +int __init init_sym_code_functions(void)
>> +{
>> +	size_t size = (unsigned long)__sym_code_functions_end -
>> +		      (unsigned long)__sym_code_functions_start;
>> +
>> +	sym_code_functions = (struct code_range *)__sym_code_functions_start;
>> +	/*
>> +	 * Order it so that sym_code_functions is not visible before
>> +	 * num_sym_code_functions.
>> +	 */
>> +	smp_mb();
>> +	num_sym_code_functions = size / sizeof(struct code_range);
>> +
>> +	return 0;
>> +}
>> +early_initcall(init_sym_code_functions);
> 
> There's no reason to need an initcall for this; we can iterate over this
> directly using __sym_code_functions_start and __sym_code_functions_end, like we
> do for exception tables today.
> 
> For example:
> 
> static inline bool pc_is_sym_code(unsigned long pc)
> {
> 	extern struct code_range *__sym_code_functions_start;
> 	extern struct code_range *__sym_code_functions_end;
> 
> 	struct code_range *r;
> 
> 	for (r = __sym_code_functions_start; r < __sym_code_functions_end; r++) {
> 		if (pc >= r->start && pc < r->end)
> 			return true;
> 	}
> 
> 	return false;
> }
> 

OK.

However, I have decided to hold off on the reliability checks until we have the right
structure in the unwind code. I am also trying to address the question of reliability
with a single FP check in my FP validation series.

So, for now, I will remove the reliability checks part of the patch series.

Thanks for the review though. It will be useful when I revisit this in the future and
resend.

Thanks.

Madhavan



More information about the linux-arm-kernel mailing list