[PATCH] riscv: hwprobe: Fix stale vDSO data for late-initialized keys at boot

Yixun Lan dlan at gentoo.org
Wed May 21 00:52:35 PDT 2025


Hi Jingwei,

On 13:27 Wed 21 May, wangjingwei at iscas.ac.cn wrote:
> From: Jingwei Wang <wangjingwei at iscas.ac.cn>
> 
> The riscv_hwprobe vDSO data is populated by init_hwprobe_vdso_data(),
> an arch_initcall_sync. However, underlying data for some keys, like
> RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF, is determined asynchronously.
> 
> Specifically, the per_cpu(vector_misaligned_access, cpu) values are set
> by the vec_check_unaligned_access_speed_all_cpus kthread. This kthread
> is spawned by an earlier arch_initcall (check_unaligned_access_all_cpus)
> and may complete its benchmark *after* init_hwprobe_vdso_data() has
> already populated the vDSO with default/stale values.
> 
> This patch introduces riscv_hwprobe_vdso_sync(sync_key). This function
> is now called by the vec_check_unaligned_access_speed_all_cpus kthread
> upon its completion. It re-evaluates the specified key using current kernel
> state (including the finalized per-CPU data) via hwprobe_one_pair()
> and updates the corresponding entry in vdso_k_arch_data.
> 
Personally, I think it's unnecessary to explain the patch line by line;
a high-level summary would be great.

> This ensures the vDSO accurately reflects the final boot-time values
> for keys determined by such asynchronous boot tasks, resolving observed
> inconsistencies when userspace starts.
> 
> Test by comparing vDSO and syscall results for affected keys
> (e.g., MISALIGNED_VECTOR_PERF), which now match their final
> boot-time values.
> 
> Reported-by: Tsukasa OI <research_trasio at irq.a4lg.com>
> Closes: https://lore.kernel.org/linux-riscv/760d637b-b13b-4518-b6bf-883d55d44e7f@irq.a4lg.com/
Since you provide a Closes: tag here, I'd ask:

Can you check which commit introduced this problem? It might warrant
a Fixes: tag; please also CC stable if needed.

(Check the kernel commit log for the format of the Fixes: tag:
 a 12-character hash + the full commit title.)
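For example, the tag would look like this (the hash and title below are
placeholders, not the actual offending commit):

  Fixes: 123456789abc ("riscv: title of the commit that introduced the bug")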

> Signed-off-by: Jingwei Wang <wangjingwei at iscas.ac.cn>
> ---
>  arch/riscv/include/asm/hwprobe.h           |  4 ++++
>  arch/riscv/kernel/sys_hwprobe.c            | 16 ++++++++++++++++
>  arch/riscv/kernel/unaligned_access_speed.c |  4 +++-
>  3 files changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/riscv/include/asm/hwprobe.h b/arch/riscv/include/asm/hwprobe.h
> index 1f690fea0e03de6a..02c34f03d8b9bc83 100644
> --- a/arch/riscv/include/asm/hwprobe.h
> +++ b/arch/riscv/include/asm/hwprobe.h
> @@ -40,4 +40,8 @@ static inline bool riscv_hwprobe_pair_cmp(struct riscv_hwprobe *pair,
>  	return pair->value == other_pair->value;
>  }
>  
> +#ifdef CONFIG_MMU
> +void riscv_hwprobe_vdso_sync(s64 sync_key);
> +#endif /* CONFIG_MMU */
> +
we usually try to avoid repeat "#ifdef CONFIG_MMU" in header & c file (unaligned_access_speed.c)
so, you can do

#ifdef CONFIG_MMU
void riscv_hwprobe_vdso_sync(s64 sync_key);
#else
static inline void riscv_hwprobe_vdso_sync(s64 sync_key) { }
#endif /* CONFIG_MMU */


>  #endif
> diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
> index 249aec8594a92a80..c2593bd766055d35 100644
> --- a/arch/riscv/kernel/sys_hwprobe.c
> +++ b/arch/riscv/kernel/sys_hwprobe.c
> @@ -17,6 +17,7 @@
>  #include <asm/vector.h>
>  #include <asm/vendor_extensions/thead_hwprobe.h>
>  #include <vdso/vsyscall.h>
> +#include <vdso/datapage.h>
>  
>  
>  static void hwprobe_arch_id(struct riscv_hwprobe *pair,
> @@ -500,6 +501,21 @@ static int __init init_hwprobe_vdso_data(void)
>  
>  arch_initcall_sync(init_hwprobe_vdso_data);
>  
> +void riscv_hwprobe_vdso_sync(s64 sync_key)
> +{
> +	struct vdso_arch_data *avd = vdso_k_arch_data;
> +	struct riscv_hwprobe pair;
> +
> +	pair.key = sync_key;
> +	hwprobe_one_pair(&pair, cpu_online_mask);
> +	/*
> +	 * Update vDSO data for the given key.
> +	 * Currently for non-ID key updates (e.g. MISALIGNED_VECTOR_PERF),
> +	 * so 'homogeneous_cpus' is not re-evaluated here.
> +	 */
> +	avd->all_cpu_hwprobe_values[sync_key] = pair.value;
> +}
> +
>  #endif /* CONFIG_MMU */
>  
>  SYSCALL_DEFINE5(riscv_hwprobe, struct riscv_hwprobe __user *, pairs,
> diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
> index 585d2dcf2dab1ccb..7194aee3bf3234e5 100644
> --- a/arch/riscv/kernel/unaligned_access_speed.c
> +++ b/arch/riscv/kernel/unaligned_access_speed.c
> @@ -375,7 +375,9 @@ static void check_vector_unaligned_access(struct work_struct *work __always_unus
>  static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
>  {
>  	schedule_on_each_cpu(check_vector_unaligned_access);
> -
> +#ifdef CONFIG_MMU
> +	riscv_hwprobe_vdso_sync(RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF);
> +#endif
>  	return 0;
>  }
>  #else /* CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS */
> -- 
> 2.49.0
> 
> 

-- 
Yixun Lan (dlan)
