[PATCH 1/3] riscv/kprobe: Optimize the performance of patching single-step slot
Liao Chang
liaochang1 at huawei.com
Fri Sep 23 01:46:56 PDT 2022
The single-step slot is not used until the kprobe is enabled, which
means no race condition can occur on it under SMP; hence it is safe to
patch the ss slot without stopping the machine.
Acked-by: Masami Hiramatsu (Google) <mhiramat at kernel.org>
Signed-off-by: Liao Chang <liaochang1 at huawei.com>
---
arch/riscv/kernel/probes/kprobes.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
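
(Illustrative aside, not part of the patch: a minimal userspace sketch
of the slot assembly this change performs, i.e. build the "probed insn
+ ebreak" sequence in a private buffer and publish it with a single
write instead of two stop_machine-based patch_text() calls. The
constant values, the demo opcode and the fake_patch_text_nosync() stub
below are assumptions standing in for the kernel definitions of
kprobe_opcode_t, MAX_INSN_SIZE, GET_INSN_LENGTH() and __BUG_INSN_32.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint32_t kprobe_opcode_t;

#define MAX_INSN_SIZE		2		/* slot holds up to two opcodes */
#define __BUG_INSN_32		0x00100073U	/* ebreak (assumed encoding) */

/* 32-bit insn iff the low two bits are 0b11, else 16-bit compressed (assumed) */
#define GET_INSN_LENGTH(insn)	((((insn) & 0x3) == 0x3) ? 4 : 2)

/* Stand-in for patch_text_nosync(): plain copy, no stop_machine involved */
static void fake_patch_text_nosync(void *addr, const void *insns, size_t len)
{
	memcpy(addr, insns, len);
}

int main(void)
{
	kprobe_opcode_t opcode = 0x00000013;	/* "addi x0,x0,0" (nop), demo only */
	unsigned long offset = GET_INSN_LENGTH(opcode);
	kprobe_opcode_t slot[MAX_INSN_SIZE];
	uint8_t ss_area[sizeof(slot)] = { 0 };	/* pretend this is p->ainsn.api.insn */

	/* Assemble "probed insn + ebreak" in a private buffer first ... */
	memcpy(slot, &opcode, offset);
	*(kprobe_opcode_t *)((unsigned long)slot + offset) = __BUG_INSN_32;

	/* ... then publish the whole slot with one non-synchronizing write */
	fake_patch_text_nosync(ss_area, slot,
			       offset + GET_INSN_LENGTH(__BUG_INSN_32));

	for (size_t i = 0; i < offset + GET_INSN_LENGTH(__BUG_INSN_32); i++)
		printf("%02x ", ss_area[i]);
	printf("\n");

	return 0;
}

Because the slot is assembled privately and no CPU can execute it
before the kprobe is enabled, a single patch_text_nosync() write is
sufficient; avoiding the stop_machine() round trips of patch_text() is
where the performance win comes from.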
diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
index e6e950b7cf32..bc1f39b96e41 100644
--- a/arch/riscv/kernel/probes/kprobes.c
+++ b/arch/riscv/kernel/probes/kprobes.c
@@ -24,12 +24,14 @@ post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
 static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
 {
 	unsigned long offset = GET_INSN_LENGTH(p->opcode);
+	kprobe_opcode_t slot[MAX_INSN_SIZE];
 
 	p->ainsn.api.restore = (unsigned long)p->addr + offset;
 
-	patch_text(p->ainsn.api.insn, p->opcode);
-	patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset),
-		   __BUG_INSN_32);
+	memcpy(slot, &p->opcode, offset);
+	*(kprobe_opcode_t *)((unsigned long)slot + offset) = __BUG_INSN_32;
+	patch_text_nosync(p->ainsn.api.insn, slot,
+			  offset + GET_INSN_LENGTH(__BUG_INSN_32));
 }
 
 static void __kprobes arch_prepare_simulate(struct kprobe *p)
--
2.17.1