[PATCH v4 2/8] arm64/runtime-const: Use aarch64_insn_patch_text_nosync() for patching
K Prateek Nayak
kprateek.nayak at amd.com
Thu Apr 30 02:47:24 PDT 2026
The current scheme of directly patching the kernel text for runtime
constants runs into the following issue when futex is adapted to use
runtime constants on arm64:
Unable to handle kernel write to read-only memory at virtual address ...
The pc points to the *p assignment in the following call chain:
futex_init()
runtime_const_init(shift, __futex_shift)
__runtime_fixup_shift()
*p = cpu_to_le32(insn);
which suggests that core_initcall() is too late to patch the kernel text
directly, unlike "d_hash_shift", which is initialized during
vfs_caches_init_early() before the write protections are in place.
Use aarch64_insn_patch_text_nosync() to patch the runtime constants
instead of writing to the text directly, which allows
runtime_const_init() to run slightly later in boot.
Since aarch64_insn_patch_text_nosync() calls caches_clean_inval_pou()
internally, __runtime_fixup_caches() ends up being redundant.
Calls to runtime_const_init() are rare, and the overhead of multiple
calls to caches_clean_inval_pou() instead of batching them together
should be negligible in practice.
The cpu_to_le32() conversion of the instruction isn't necessary since it
is handled later in the aarch64_insn_patch_text_nosync() call-chain:
aarch64_insn_patch_text_nosync(addr, insn)
aarch64_insn_write(addr, insn)
__aarch64_insn_write(addr, cpu_to_le32(insn))
Sashiko noted that aarch64_insn_patch_text_nosync() does not expect an
lm_alias() address, and Catalin suggested it is safe to drop the
lm_alias() for runtime patching since the kernel text is readable. The
address passed to the fixup functions is interpreted as a __le32 and
dereferenced as is to read the opcode at the patch site.
No functional changes are intended.
Signed-off-by: K Prateek Nayak <kprateek.nayak at amd.com>
---
changelog v3..v4:
o Dropped the lm_alias() and use the patch location as is for
aarch64_insn_patch_text_nosync(). (Sashiko, Catalin)
---
arch/arm64/include/asm/runtime-const.h | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/include/asm/runtime-const.h b/arch/arm64/include/asm/runtime-const.h
index c3dbd3ae68f69..838145bc289d2 100644
--- a/arch/arm64/include/asm/runtime-const.h
+++ b/arch/arm64/include/asm/runtime-const.h
@@ -7,6 +7,7 @@
#endif
#include <asm/cacheflush.h>
+#include <asm/text-patching.h>
/* Sigh. You can still run arm64 in BE mode */
#include <asm/byteorder.h>
@@ -50,34 +51,26 @@ static inline void __runtime_fixup_16(__le32 *p, unsigned int val)
u32 insn = le32_to_cpu(*p);
insn &= 0xffe0001f;
insn |= (val & 0xffff) << 5;
- *p = cpu_to_le32(insn);
-}
-
-static inline void __runtime_fixup_caches(void *where, unsigned int insns)
-{
- unsigned long va = (unsigned long)where;
- caches_clean_inval_pou(va, va + 4*insns);
+ aarch64_insn_patch_text_nosync(p, insn);
}
static inline void __runtime_fixup_ptr(void *where, unsigned long val)
{
- __le32 *p = lm_alias(where);
+ __le32 *p = where;
__runtime_fixup_16(p, val);
__runtime_fixup_16(p+1, val >> 16);
__runtime_fixup_16(p+2, val >> 32);
__runtime_fixup_16(p+3, val >> 48);
- __runtime_fixup_caches(where, 4);
}
/* Immediate value is 6 bits starting at bit #16 */
static inline void __runtime_fixup_shift(void *where, unsigned long val)
{
- __le32 *p = lm_alias(where);
+ __le32 *p = where;
u32 insn = le32_to_cpu(*p);
insn &= 0xffc0ffff;
insn |= (val & 63) << 16;
- *p = cpu_to_le32(insn);
- __runtime_fixup_caches(where, 1);
+ aarch64_insn_patch_text_nosync(p, insn);
}
static inline void runtime_const_fixup(void (*fn)(void *, unsigned long),
--
2.34.1