[RFC] riscv: restore the irq save/restore logic for nosync code patching

Wu Zhangjin falcon at tinylab.org
Fri Jun 10 19:31:22 PDT 2022


Commit 0ff7c3b331276f584bde3ae9a16bacd8fa3d01e6 removed the old
patch_lock together with the irq save/restore logic.

But the patching interface patch_text_nosync() now has users such as
jump labels that do not run under stop_machine(). Restore the irq
save/restore logic for those users, as the other architectures do.

Also move the lockdep assert to the path that requires it: the current
stop_machine()-based ftrace implementation should use the newly added
patch_text_noirq(), which omits this lockdep assert; it is serialized
by its own mutex, which is not text_mutex.

As recent mailing list discussion shows, a new ftrace implementation
without stop_machine() is under review. It would require either the
nosync version with this irq save/restore logic, or the newly added
noirq version (which also calls patch_insn_write()) together with
explicit text_mutex locking and irq save/restore at the call site.
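
For illustration, the calling convention this patch establishes looks
roughly as follows (a sketch only, not part of the diff; the caller
names are hypothetical):

```c
/*
 * A non-stop_machine() user, e.g. the jump label code, holds
 * text_mutex itself; patch_text_nosync() now disables irqs
 * internally around the actual write:
 */
mutex_lock(&text_mutex);              /* satisfies lockdep_assert_held() */
patch_text_nosync(addr, &insn, len);  /* local_irq_save/restore inside  */
mutex_unlock(&text_mutex);

/*
 * A stop_machine() callback already runs with irqs disabled and all
 * cores rendezvoused, so it calls the noirq variant directly, with
 * no text_mutex requirement:
 */
patch_text_noirq(addr, &insn, len);
```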

Signed-off-by: Wu Zhangjin <falcon at tinylab.org>
---
 arch/riscv/kernel/patch.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 765004b60513..03f03832e077 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -55,13 +55,6 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 	bool across_pages = (((uintptr_t) addr & ~PAGE_MASK) + len) > PAGE_SIZE;
 	int ret;
 
-	/*
-	 * Before reaching here, it was expected to lock the text_mutex
-	 * already, so we don't need to give another lock here and could
-	 * ensure that it was safe between each cores.
-	 */
-	lockdep_assert_held(&text_mutex);
-
 	if (across_pages)
 		patch_map(addr + len, FIX_TEXT_POKE1);
 
@@ -85,7 +78,8 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 NOKPROBE_SYMBOL(patch_insn_write);
 #endif /* CONFIG_MMU */
 
-int patch_text_nosync(void *addr, const void *insns, size_t len)
+/* For stop_machine() or other callers that have already disabled irqs and serialized across cores */
+int patch_text_noirq(void *addr, const void *insns, size_t len)
 {
 	u32 *tp = addr;
 	int ret;
@@ -97,6 +91,26 @@ int patch_text_nosync(void *addr, const void *insns, size_t len)
 
 	return ret;
 }
+NOKPROBE_SYMBOL(patch_text_noirq);
+
+int patch_text_nosync(void *addr, const void *insns, size_t len)
+{
+	int ret;
+	unsigned long flags;
+
+	/*
+	 * The caller is expected to hold text_mutex already, so no
+	 * additional lock is needed here; that keeps the patching
+	 * safe across cores.
+	 */
+	lockdep_assert_held(&text_mutex);
+
+	local_irq_save(flags);
+	ret = patch_text_noirq(addr, insns, len);
+	local_irq_restore(flags);
+
+	return ret;
+}
 NOKPROBE_SYMBOL(patch_text_nosync);
 
 static int patch_text_cb(void *data)
@@ -106,7 +120,7 @@ static int patch_text_cb(void *data)
 
 	if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
 		ret =
-		    patch_text_nosync(patch->addr, &patch->insn,
+		    patch_text_noirq(patch->addr, &patch->insn,
 					    GET_INSN_LENGTH(patch->insn));
 		atomic_inc(&patch->cpu_count);
 	} else {
-- 
2.35.1



