[RFC PATCH 12/16] kvm/arm64: make alternative callbacks safe
Marc Zyngier
maz at kernel.org
Mon Oct 20 10:05:03 PDT 2025
nit: please keep $SUBJECT consistent with the subsystem you are
patching: "KVM: arm64: Make alternative callbacks safe"
On Tue, 23 Sep 2025 18:48:59 +0100,
Ada Couprie Diaz <ada.coupriediaz at arm.com> wrote:
>
> Alternative callback functions are regular functions, which means they
> or any function they call could get patched or instrumented
> by alternatives or other parts of the kernel.
> Since applying alternatives only guarantees a consistent state once
> patching is complete, and handles cache maintenance manually, running
> partially-patched code could lead to nasty corruptions and the
> execution of bogus code.
>
> Make the KVM alternative callbacks safe by marking them `noinstr` and
> `__always_inline`'ing their helpers.
> This is possible thanks to previous commits making `aarch64_insn_...`
> functions used in the callbacks safe to inline.
>
> `kvm_update_va_mask()` is already marked as `__init`, which has its own
> text section conflicting with the `noinstr` one.
> Instead, use `__noinstr_section(".init.text")` to apply
> all the function attributes implied by `noinstr`, without the section
> conflict.
> This can be an issue, as kprobes seems to blacklist code based on text
> sections rather than function attributes.
>
> Signed-off-by: Ada Couprie Diaz <ada.coupriediaz at arm.com>
> ---
> This is missing `kvm_patch_vector_branch()`, which could receive the same
> treatment, but the `WARN_ON_ONCE` in the early-exit check would make it
> call into instrumentable code.
> I do not currently know if this `WARN` can safely be removed or if it
> has some importance.
This is only debug code, which could be easily removed. However, I'd
like to suggest that we add a method to indicate that a patching
callback has failed for whatever reason. This doesn't have to be a
complex piece of infrastructure, and can be best effort (you can only
fail a single callback in a single location).
But it would certainly help catch unexpected situations (been
there, done that, ended up with visible scars...).
> ---
> arch/arm64/kvm/va_layout.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
> index 91b22a014610..3ebb7e0074f6 100644
> --- a/arch/arm64/kvm/va_layout.c
> +++ b/arch/arm64/kvm/va_layout.c
> @@ -109,7 +109,7 @@ __init void kvm_apply_hyp_relocations(void)
> }
> }
>
> -static u32 compute_instruction(int n, u32 rd, u32 rn)
> +static __always_inline u32 compute_instruction(int n, u32 rd, u32 rn)
> {
> u32 insn = AARCH64_BREAK_FAULT;
>
> @@ -151,6 +151,7 @@ static u32 compute_instruction(int n, u32 rd, u32 rn)
> return insn;
> }
>
> +__noinstr_section(".init.text")
> void __init kvm_update_va_mask(struct alt_instr *alt,
> __le32 *origptr, __le32 *updptr, int nr_inst)
> {
> @@ -241,7 +242,8 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
> *updptr++ = cpu_to_le32(insn);
> }
>
> -static void generate_mov_q(u64 val, __le32 *origptr, __le32 *updptr, int nr_inst)
> +static __always_inline void generate_mov_q(u64 val, __le32 *origptr,
> + __le32 *updptr, int nr_inst)
> {
> u32 insn, oinsn, rd;
>
generate_mov_q() (and others) start with a BUG_ON(), which may not be
recoverable, just like WARN_ON() above. That's where we should be able
to fail (more or less) gracefully and at least indicate the failure.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.