[RFC PATCH v2 7/8] arm64: Detect kretprobed functions in stack trace
madvenka at linux.microsoft.com
Mon Mar 15 16:57:59 GMT 2021
From: "Madhavan T. Venkataraman" <madvenka at linux.microsoft.com>
When a kretprobe is active for a function, the function's return address
in its stack frame is modified to point to the kretprobe trampoline. When
the function returns, the frame is popped and control is transferred
to the trampoline. The trampoline eventually returns to the original return
address.
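For reference, the arm64 kretprobe setup replaces the saved link register
(x30) with the trampoline address roughly as follows. This is a simplified
sketch of arch_prepare_kretprobe() in arch/arm64/kernel/probes/kprobes.c,
trimmed to the two relevant assignments:

void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
				      struct pt_regs *regs)
{
	/* Save the real return address so the trampoline can restore it. */
	ri->ret_addr = (kprobe_opcode_t *)regs->regs[30];

	/* Replace the return address (x30) with the trampoline. */
	regs->regs[30] = (long)&kretprobe_trampoline;
}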
If a stack walk is done within the function (or any functions that get
called from there), the stack trace will only show the trampoline and
not the original caller. Detect this and mark the stack trace as
unreliable.
Similarly, a stack trace taken while executing the trampoline itself (or
the functions it calls) has the same problem. Detect this as well.
Both cases are detected by looking up the symbol table entry for the
trampoline and checking whether the return PC in a frame falls anywhere
within the trampoline function.
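To illustrate how the unreliability marking could be consumed, here is a
hypothetical walker (not part of this patch) that gives up as soon as a
frame is flagged unreliable, for example because its return PC falls
inside kretprobe_trampoline. It assumes the frame->reliable flag and the
check_if_reliable() hook introduced earlier in this series, and that the
task being walked is not currently running:

/* Hypothetical example, would live next to the unwinder in stacktrace.c. */
static int example_reliable_walk(struct task_struct *tsk)
{
	struct stackframe frame;

	start_backtrace(&frame, thread_saved_fp(tsk), thread_saved_pc(tsk));

	for (;;) {
		/* Give up if the unwinder flagged this frame as unreliable. */
		if (!frame.reliable)
			return -EINVAL;

		/* unwind_frame() returns non-zero when the walk is done. */
		if (unwind_frame(tsk, &frame))
			break;
	}

	return 0;
}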
Signed-off-by: Madhavan T. Venkataraman <madvenka at linux.microsoft.com>
---
arch/arm64/kernel/stacktrace.c | 43 ++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 358aae3906d7..752b77f11c61 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,6 +18,26 @@
#include <asm/stack_pointer.h>
#include <asm/stacktrace.h>
+#ifdef CONFIG_KRETPROBES
+static bool kretprobe_detected(struct stackframe *frame)
+{
+	static char kretprobe_name[KSYM_NAME_LEN];
+	static unsigned long kretprobe_pc, kretprobe_end_pc;
+	unsigned long pc, offset, size;
+
+	if (!kretprobe_pc) {
+		pc = (unsigned long) kretprobe_trampoline;
+		if (!kallsyms_lookup(pc, &size, &offset, NULL, kretprobe_name))
+			return false;
+
+		kretprobe_pc = pc - offset;
+		kretprobe_end_pc = kretprobe_pc + size;
+	}
+
+	return frame->pc >= kretprobe_pc && frame->pc < kretprobe_end_pc;
+}
+#endif
+
static void check_if_reliable(unsigned long fp, struct stackframe *frame,
struct stack_info *info)
{
@@ -111,6 +131,29 @@ static void check_if_reliable(unsigned long fp, struct stackframe *frame,
frame->reliable = false;
return;
}
+
+#ifdef CONFIG_KRETPROBES
+	/*
+	 * The return address of a function that has an active kretprobe
+	 * is modified in the stack frame to point to a trampoline. So,
+	 * the original return address is not available on the stack.
+	 *
+	 * A stack trace taken while executing the function (and its
+	 * descendants) will not show the original caller. So, mark the
+	 * stack trace as unreliable if the trampoline shows up in the
+	 * stack trace. (Obtaining the original return address from
+	 * task->kretprobe_instances seems problematic and not worth the
+	 * effort).
+	 *
+	 * A stack trace taken while inside the trampoline (or in functions
+	 * called by the trampoline) has the same problem as above. This
+	 * case is also covered by kretprobe_detected() via its range check.
+	 */
+	if (kretprobe_detected(frame)) {
+		frame->reliable = false;
+		return;
+	}
+#endif
}
/*
--
2.25.1