[PATCH v6 6/8] x86/module: prepare module loading for ROX allocations of text
Steven Rostedt
rostedt at goodmis.org
Wed Oct 16 14:01:28 PDT 2024
On Wed, 16 Oct 2024 15:24:22 +0300
Mike Rapoport <rppt at kernel.org> wrote:
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index 8da0e66ca22d..b498897b213c 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -118,10 +118,13 @@ ftrace_modify_code_direct(unsigned long ip, const char *old_code,
>  		return ret;
>  
>  	/* replace the text with the new text */
> -	if (ftrace_poke_late)
> +	if (ftrace_poke_late) {
>  		text_poke_queue((void *)ip, new_code, MCOUNT_INSN_SIZE, NULL);
> -	else
> -		text_poke_early((void *)ip, new_code, MCOUNT_INSN_SIZE);
> +	} else {
> +		mutex_lock(&text_mutex);
> +		text_poke((void *)ip, new_code, MCOUNT_INSN_SIZE);
> +		mutex_unlock(&text_mutex);
> +	}
>  	return 0;
>  }
So this slows down the boot by over 30ms. That may not sound like much, but
we care very much about boot times. This code is serialized with boot and
runs whenever ftrace is configured in the kernel. The way I measured this
was by adding:
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 4dd0ad6c94d6..b72bb9943140 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -104,6 +104,8 @@ static int ftrace_verify_code(unsigned long ip, const char *old_code)
 	return 0;
 }
 
+u64 sdr_total;
+
 /*
  * Marked __ref because it calls text_poke_early() which is .init.text. That is
  * ok because that call will happen early, during boot, when .init sections are
@@ -114,6 +116,8 @@ ftrace_modify_code_direct(unsigned long ip, const char *old_code,
 			  const char *new_code)
 {
 	int ret = ftrace_verify_code(ip, old_code);
+	u64 start, stop;
+
 	if (ret)
 		return ret;
@@ -121,9 +125,12 @@ ftrace_modify_code_direct(unsigned long ip, const char *old_code,
 	if (ftrace_poke_late) {
 		text_poke_queue((void *)ip, new_code, MCOUNT_INSN_SIZE, NULL);
 	} else {
+		start = trace_clock_local();
 		mutex_lock(&text_mutex);
 		text_poke((void *)ip, new_code, MCOUNT_INSN_SIZE);
 		mutex_unlock(&text_mutex);
+		stop = trace_clock_local();
+		sdr_total += stop - start;
 	}
 	return 0;
 }
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index c01375adc471..93284557144d 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -10738,6 +10738,11 @@ __init static int tracer_alloc_buffers(void)
 	register_snapshot_cmd();
 
+	{
+		extern u64 sdr_total;
+		printk("SDR TOTAL: %lld\n", sdr_total);
+	}
+
 	test_can_verify();
 
 	return 0;
I did the same measurement before this patch was applied. I ran each kernel
three times and got the following numbers (all in nanoseconds):
before: 11356637 11863526 11507750
after: 43978750 41293162 42741067
Before this patch, the total updates took around 11ms. After the patch they
take around 42ms. This is because we are patching 59 thousand call sites here:
# dmesg |grep ftrace
[ 1.620569] ftrace: allocating 59475 entries in 233 pages
[ 1.667178] ftrace: allocated 233 pages with 5 groups
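Doing the per-site math on those numbers (my arithmetic, from the averages
above):

  before: ~11.6 ms / 59475 sites ~= 195 ns per update
  after:  ~42.7 ms / 59475 sites ~= 718 ns per update

That is roughly 3.7x the cost per call site, which adds up quickly at this
scale.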
If this is only needed for module load, can we at least still use
text_poke_early() during boot?
	if (ftrace_poke_late) {
		text_poke_queue((void *)ip, new_code, MCOUNT_INSN_SIZE, NULL);
	} else if (system_state == SYSTEM_BOOTING) {
		text_poke_early((void *)ip, new_code, MCOUNT_INSN_SIZE);
	} else {
		mutex_lock(&text_mutex);
		text_poke((void *)ip, new_code, MCOUNT_INSN_SIZE);
		mutex_unlock(&text_mutex);
	}
?
The above if statement does slow things down slightly, but only by about
2ms, which is much more reasonable.
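(For context on why the early path is so much cheaper: during early boot we
are still single-CPU and the kernel text is still writable, so
text_poke_early() can just write the bytes in place, while text_poke() has
to set up a temporary writable mapping for every single write, under
text_mutex. Roughly, a simplified sketch of the early path - the real
function in arch/x86/kernel/alternative.c has a bit more to it:

void __init_or_module text_poke_early(void *addr, const void *opcode,
				      size_t len)
{
	unsigned long flags;

	/* Text is still writable here, so no temporary mapping is needed */
	local_irq_save(flags);
	memcpy(addr, opcode, len);
	local_irq_restore(flags);
	sync_core();
}

No locking and no page-table manipulation, just a straight memcpy.)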
-- Steve