[PATCH v3 2/7] arm64/runtime-const: Use aarch64_insn_patch_text_nosync() for patching

K Prateek Nayak kprateek.nayak at amd.com
Sat Apr 11 12:54:45 PDT 2026


Hello Catalin,

Thank you for taking a look at the series and pointing me to the
Sashiko review.

On 4/10/2026 3:07 PM, Catalin Marinas wrote:
>> -static inline void __runtime_fixup_caches(void *where, unsigned int insns)
>> -{
>> -     unsigned long va = (unsigned long)where;
>> -     caches_clean_inval_pou(va, va + 4*insns);
>> +     aarch64_insn_patch_text_nosync(p, insn);
>>  }
> 
> Sashiko has some good points here:

Ack! I did check the Sashiko review a few days after posting. I'll
probably start replying to Sashiko's inline reviews on future threads
on LKML to keep the record, like some folks are doing.

> 
> https://sashiko.dev/#/patchset/20260402112250.2138-1-kprateek.nayak@amd.com
> 
> In short, aarch64_insn_patch_text_nosync() does not expect a linear map
> address but rather a kernel text one (or vmalloc/modules). The other
> valid point is on aliasing I-caches.
> 
> I think dropping the lm_alias() and just using 'where' directly would
> do, but I haven't tried.

Ack! I completely missed the subtlety that the old code passed "where"
(the kernel text address), and not the lm_alias() one, to
caches_clean_inval_pou(). I'm still surprised it didn't blow up in my
testing.
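
For anyone following along, aarch64_insn_patch_text_nosync() roughly
does the following (paraphrased from arch/arm64/kernel/patching.c;
the exact shape may differ between trees):

	int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
	{
		u32 *tp = addr;
		int ret;

		/* A64 instructions must be word aligned */
		if ((uintptr_t)tp & 0x3)
			return -EINVAL;

		ret = aarch64_insn_write(tp, insn);
		if (ret == 0)
			caches_clean_inval_pou((uintptr_t)tp,
					       (uintptr_t)tp + AARCH64_INSN_SIZE);

		return ret;
	}

The write itself goes through a text-poke fixmap inside
aarch64_insn_write(), and the cache maintenance is done on the very
address the caller passes in, so it has to be handed the kernel text
address and not the lm_alias() one.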

Anyhow, the following diff on top of the full series builds and tests
fine, and has been blessed by the review prompts:

diff --git a/arch/arm64/include/asm/runtime-const.h b/arch/arm64/include/asm/runtime-const.h
index 21f817eb5951..d3f0dfa7ced0 100644
--- a/arch/arm64/include/asm/runtime-const.h
+++ b/arch/arm64/include/asm/runtime-const.h
@@ -57,21 +57,21 @@
 } while (0)
 
 /* 16-bit immediate for wide move (movz and movk) in bits 5..20 */
-static inline void __runtime_fixup_16(__le32 *p, unsigned int val)
+static inline void __runtime_fixup_16(void *where, unsigned int val)
 {
+	__le32 *p = lm_alias(where);
 	u32 insn = le32_to_cpu(*p);
 	insn &= 0xffe0001f;
 	insn |= (val & 0xffff) << 5;
-	aarch64_insn_patch_text_nosync(p, insn);
+	aarch64_insn_patch_text_nosync(where, insn);
 }
 
 static inline void __runtime_fixup_ptr(void *where, unsigned long val)
 {
-	__le32 *p = lm_alias(where);
-	__runtime_fixup_16(p, val);
-	__runtime_fixup_16(p+1, val >> 16);
-	__runtime_fixup_16(p+2, val >> 32);
-	__runtime_fixup_16(p+3, val >> 48);
+	__runtime_fixup_16(where, val);
+	__runtime_fixup_16(where + 4, val >> 16);
+	__runtime_fixup_16(where + 8, val >> 32);
+	__runtime_fixup_16(where + 12, val >> 48);
 }
 
 /* Immediate value is 6 bits starting at bit #16 */
@@ -81,15 +81,14 @@ static inline void __runtime_fixup_shift(void *where, unsigned long val)
 	u32 insn = le32_to_cpu(*p);
 	insn &= 0xffc0ffff;
 	insn |= (val & 63) << 16;
-	aarch64_insn_patch_text_nosync(p, insn);
+	aarch64_insn_patch_text_nosync(where, insn);
 }
 
 /* Immediate value is 6 bits starting at bit #16 */
 static inline void __runtime_fixup_mask(void *where, unsigned long val)
 {
-	__le32 *p = lm_alias(where);
-	__runtime_fixup_16(p, val);
-	__runtime_fixup_16(p+1, val >> 16);
+	__runtime_fixup_16(where, val);
+	__runtime_fixup_16(where + 4, val >> 16);
 }
 
 static inline void runtime_const_fixup(void (*fn)(void *, unsigned long),
---
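
As a quick sanity check of the immediate masking, I also ran the fixup
logic through a small userspace harness (illustrative only; fixup_16()
below is a standalone copy of the bit manipulation in
__runtime_fixup_16(), with the endianness handling dropped):

	#include <stdio.h>
	#include <stdint.h>

	/* Mirrors the insn manipulation in __runtime_fixup_16() */
	static uint32_t fixup_16(uint32_t insn, unsigned int val)
	{
		insn &= 0xffe0001f;		/* keep opcode, hw and Rd */
		insn |= (val & 0xffff) << 5;	/* imm16 goes in bits 5..20 */
		return insn;
	}

	int main(void)
	{
		/* 0xd299bde0 == movz x0, #0xcdef, i.e. the kind of
		 * placeholder the runtime_const_ptr() asm emits */
		uint32_t insn = fixup_16(0xd299bde0, 0x1234);

		/* Prints 0xd2824680 == movz x0, #0x1234 */
		printf("patched insn: 0x%08x\n", insn);
		return 0;
	}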

I'll do some more sanity checks and address the rest of the comments
before posting out v4 soon after the merge window. Thank you again
for your feedback. Much appreciated.

-- 
Thanks and Regards,
Prateek