[PATCH v3] RISC-V: Don't check text_mutex during stop_machine
Changbin Du
changbin.du at huawei.com
Thu Feb 16 03:31:26 PST 2023
On Wed, Feb 15, 2023 at 04:43:17PM +0000, Conor Dooley wrote:
> From: Palmer Dabbelt <palmerdabbelt at google.com>
>
> We're currently using stop_machine() to update ftrace, which means that
> the thread that takes text_mutex during ftrace_prepare() may not be the
> same as the thread that eventually patches the code. This isn't
> actually a race because the lock is still held (preventing any other
> concurrent accesses) and there is only one thread running during
> stop_machine(), but it does trigger a lockdep failure.
>
> This patch just elides the lockdep check during stop_machine.
>
> Fixes: c15ac4fd60d5 ("riscv/ftrace: Add dynamic function tracer support")
> Suggested-by: Steven Rostedt <rostedt at goodmis.org>
> Reported-by: Changbin Du <changbin.du at gmail.com>
> Signed-off-by: Palmer Dabbelt <palmerdabbelt at google.com>
> Signed-off-by: Palmer Dabbelt <palmer at rivosinc.com>
> Signed-off-by: Conor Dooley <conor.dooley at microchip.com>
> ---
> Resending this version as I am quite averse to deleting the assertion!
>
> Changes since v2 [<20220322022331.32136-1-palmer at rivosinc.com>]:
> * rebase on riscv/for-next as it has been a year.
> * incorporate Changbin's suggestion that init_nop should take the lock
> rather than call prepare() & post_process().
>
> Changes since v1 [<20210506071041.417854-1-palmer at dabbelt.com>]:
> * Use ftrace_arch_code_modify_{prepare,post_process}() to set the flag.
> I remember having a reason I wanted the function when I wrote the v1,
> but it's been almost a year and I forget what that was -- maybe I was
> just crazy, the patch was sent at midnight.
> * Fix DYNAMIC_FTRACE=n builds.
> ---
> arch/riscv/include/asm/ftrace.h | 7 +++++++
> arch/riscv/kernel/ftrace.c | 15 +++++++++++++--
> arch/riscv/kernel/patch.c | 10 +++++++++-
> 3 files changed, 29 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
> index 04dad3380041..3ac7609f4ee9 100644
> --- a/arch/riscv/include/asm/ftrace.h
> +++ b/arch/riscv/include/asm/ftrace.h
> @@ -81,8 +81,15 @@ do { \
> struct dyn_ftrace;
> int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
> #define ftrace_init_nop ftrace_init_nop
> +extern int riscv_ftrace_in_stop_machine;
> #endif
>
> +#else /* CONFIG_DYNAMIC_FTRACE */
> +
> +#ifndef __ASSEMBLY__
> +#define riscv_ftrace_in_stop_machine 0
> #endif
>
> +#endif /* CONFIG_DYNAMIC_FTRACE */
> +
> #endif /* _ASM_RISCV_FTRACE_H */
> diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
> index 2086f6585773..661bfa72f359 100644
> --- a/arch/riscv/kernel/ftrace.c
> +++ b/arch/riscv/kernel/ftrace.c
> @@ -11,14 +11,25 @@
> #include <asm/cacheflush.h>
> #include <asm/patch.h>
>
> +int riscv_ftrace_in_stop_machine;
> +
> #ifdef CONFIG_DYNAMIC_FTRACE
> void ftrace_arch_code_modify_prepare(void) __acquires(&text_mutex)
> {
> mutex_lock(&text_mutex);
> +
> + /*
> + * The code sequences we use for ftrace can't be patched while the
> + * kernel is running, so we need to use stop_machine() to modify them
> + * for now. This doesn't play nice with text_mutex, we use this flag
> + * to elide the check.
> + */
> + riscv_ftrace_in_stop_machine = true;
> }
>
> void ftrace_arch_code_modify_post_process(void) __releases(&text_mutex)
> {
> + riscv_ftrace_in_stop_machine = false;
> mutex_unlock(&text_mutex);
> }
>
> @@ -134,9 +145,9 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
> {
> int out;
>
> - ftrace_arch_code_modify_prepare();
> + mutex_lock(&text_mutex);
> out = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
> - ftrace_arch_code_modify_post_process();
> + mutex_unlock(&text_mutex);
>
> return out;
> }
> diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
> index 765004b60513..56b70271518d 100644
> --- a/arch/riscv/kernel/patch.c
> +++ b/arch/riscv/kernel/patch.c
> @@ -11,6 +11,7 @@
> #include <asm/kprobes.h>
> #include <asm/cacheflush.h>
> #include <asm/fixmap.h>
> +#include <asm/ftrace.h>
> #include <asm/patch.h>
>
> struct patch_insn {
> @@ -59,8 +60,15 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
> * Before reaching here, it was expected to lock the text_mutex
> * already, so we don't need to give another lock here and could
> * ensure that it was safe between each cores.
> + *
> + * We're currently using stop_machine() for ftrace, and while that
> + * ensures text_mutex is held before installing the mappings it does
> + * not ensure text_mutex is held by the calling thread. That's safe
> + * but triggers a lockdep failure, so just elide it for that specific
> + * case.
> */
> - lockdep_assert_held(&text_mutex);
> + if (!riscv_ftrace_in_stop_machine)
> + lockdep_assert_held(&text_mutex);
>
> if (across_pages)
> patch_map(addr + len, FIX_TEXT_POKE1);
This misses patch_text(), which also patches code from a stop_machine()
callback and so can trip the same lockdep check:
int patch_text(void *addr, u32 insn)
{
	struct patch_insn patch = {
		.addr = addr,
		.insn = insn,
		.cpu_count = ATOMIC_INIT(0),
	};

	return stop_machine_cpuslocked(patch_text_cb,
				       &patch, cpu_online_mask);
}
> --
> 2.39.1
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
>
--
Cheers,
Changbin Du