[PATCH v3 1/2] ftrace: disable preemption between ftrace_test_recursion_trylock/unlock()
Petr Mladek
pmladek at suse.com
Thu Oct 14 08:14:07 PDT 2021
On Wed 2021-10-13 16:51:46, 王贇 wrote:
> As the documentation explains, ftrace_test_recursion_trylock()
> and ftrace_test_recursion_unlock() are supposed to disable and
> enable preemption properly. However, this work is currently done
> outside of the functions, which could be missed by mistake.
>
> This patch makes sure that preemption is disabled when trylock()
> succeeds, and that unlock() re-enables preemption if it was
> previously enabled.
>
> diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
> index a9f9c57..58e474c 100644
> --- a/include/linux/trace_recursion.h
> +++ b/include/linux/trace_recursion.h
> @@ -214,7 +214,18 @@ static __always_inline void trace_clear_recursion(int bit)
We should also update the description above the function, for example:
/**
* ftrace_test_recursion_trylock - tests for recursion in same context
*
* Use this for ftrace callbacks. This will detect if the function
* tracing recursed in the same context (normal vs. interrupt).
*
* Returns: -1 if a recursion happened.
- * >= 0 if no recursion
+ * >= 0 if no recursion (success)
+ *
+ * Disables preemption on success. It is just for convenience.
+ * Current users need to disable preemption for their own reasons anyway.
*/
> static __always_inline int ftrace_test_recursion_trylock(unsigned long ip,
> unsigned long parent_ip)
> {
> - return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
> + int bit;
> +
> + bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
> + /*
> +	 * A zero bit indicates we are nested
> +	 * inside another trylock(), which means
> +	 * preemption is already disabled.
> + */
> + if (bit > 0)
> + preempt_disable_notrace();
Is this safe? Preemption is disabled only when trace_test_and_set_recursion()
is called via ftrace_test_recursion_trylock().

We must either always disable preemption when bit >= 0, or disable
preemption already in trace_test_and_set_recursion().

Finally, the comment confused me a lot. The difference between nesting and
recursion is far from clear, and the code is tricky like hell :-)

I propose adding some comments; see below for a proposal.
> +
> + return bit;
> }
> /**
> @@ -222,9 +233,13 @@ static __always_inline int ftrace_test_recursion_trylock(unsigned long ip,
> * @bit: The return of a successful ftrace_test_recursion_trylock()
> *
> * This is used at the end of a ftrace callback.
> + *
> + * Preemption will be enabled (if it was previously enabled).
> */
> static __always_inline void ftrace_test_recursion_unlock(int bit)
> {
> + if (bit)
This is not symmetric with trylock(). It should be:

	if (bit > 0)

Anyway, trace_clear_recursion() quietly ignores bit != 0.
> + preempt_enable_notrace();
> trace_clear_recursion(bit);
> }
Below is my proposed patch that tries to better explain the recursion
check:
From 20d69f11e2683262fa0043b803999061cbac543f Mon Sep 17 00:00:00 2001
From: Petr Mladek <pmladek at suse.com>
Date: Thu, 14 Oct 2021 16:57:39 +0200
Subject: [PATCH] trace: Better describe the recursion check return values
The trace recursion check might be called recursively by different
layers of the tracing code. This recursion is safe, and the check
is even optimized for this case.

The problematic recursion happens when the traced function is called
from the tracing code itself. This case is properly detected.

Try to explain this difference in a better way.
Signed-off-by: Petr Mladek <pmladek at suse.com>
---
include/linux/trace_recursion.h | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
index a9f9c5714e65..b5efb804efdf 100644
--- a/include/linux/trace_recursion.h
+++ b/include/linux/trace_recursion.h
@@ -159,13 +159,27 @@ extern void ftrace_record_recursion(unsigned long ip, unsigned long parent_ip);
# define do_ftrace_record_recursion(ip, pip) do { } while (0)
#endif
+/*
+ * trace_test_and_set_recursion() is called on several layers
+ * of the ftrace code when handling the same ftrace entry.
+ * These calls might be nested/recursive.
+ *
+ * It uses TRACE_LIST_*BITs to distinguish between this
+ * internal recursion and recursion caused by calling
+ * the traced function by the ftrace code.
+ *
+ * Returns: > 0 when no recursion
+ * 0 when called recursively internally (safe)
+ * -1 when the traced function was called recursively from
+ * the ftrace handler (unsafe)
+ */
static __always_inline int trace_test_and_set_recursion(unsigned long ip, unsigned long pip,
int start, int max)
{
unsigned int val = READ_ONCE(current->trace_recursion);
int bit;
- /* A previous recursion check was made */
+ /* Called recursively internally by different ftrace code layers? */
if ((val & TRACE_CONTEXT_MASK) > max)
return 0;
--
2.26.2
More information about the linux-riscv
mailing list