[PATCH v2 3/5] KVM: arm64: Refactor enter_exception64()

Marc Zyngier maz at kernel.org
Tue Jan 20 06:57:18 PST 2026


On Thu, 11 Dec 2025 11:38:26 +0000,
Fuad Tabba <tabba at google.com> wrote:
> 
> From: Quentin Perret <qperret at google.com>
> 
> To simplify the injection of exceptions into the host in the pKVM
> context, refactor enter_exception64() to split out the logic for
> calculating the exception vector offset and the target CPSR.
> 
> Extract two new helper functions:
>  - get_except64_offset(): Calculates the exception vector offset from
>    the current and target exception levels and the exception type
>  - get_except64_cpsr(): Computes the new CPSR/PSTATE when taking an
>    exception
> 
> A subsequent patch will use these helpers to inject UNDEF exceptions
> into the host when MTE system registers are accessed with MTE disabled.
> Extracting the helpers allows that code to reuse the exception entry
> logic without duplicating the CPSR and vector offset calculations.
> 
> No functional change intended.
> 
> Signed-off-by: Quentin Perret <qperret at google.com>
> Signed-off-by: Fuad Tabba <tabba at google.com>
> ---
>  arch/arm64/include/asm/kvm_emulate.h |   5 ++
>  arch/arm64/kvm/hyp/exception.c       | 100 ++++++++++++++++-----------
>  2 files changed, 63 insertions(+), 42 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index c9eab316398e..c3f04bd5b2a5 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -71,6 +71,11 @@ static inline int kvm_inject_serror(struct kvm_vcpu *vcpu)
>  	return kvm_inject_serror_esr(vcpu, ESR_ELx_ISV);
>  }
>  
> +unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
> +				  enum exception_type type);
> +unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
> +				unsigned long sctlr, unsigned long mode);

s/cpsr/pstate/, as we don't need to introduce more 32-bit terminology.
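
Something like this, perhaps (only a sketch of the rename, parameters
otherwise unchanged):

	unsigned long get_except64_pstate(unsigned long old, bool has_mte,
					  unsigned long sctlr,
					  unsigned long mode);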

> +
>  void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
>  
>  void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
> index bef40ddb16db..d3bcda665612 100644
> --- a/arch/arm64/kvm/hyp/exception.c
> +++ b/arch/arm64/kvm/hyp/exception.c
> @@ -65,12 +65,25 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
>  		vcpu->arch.ctxt.spsr_und = val;
>  }
>  
> +unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
> +				  enum exception_type type)
> +{
> +	u64 mode = psr & (PSR_MODE_MASK | PSR_MODE32_BIT);
> +	u64 exc_offset;
> +
> +	if      (mode == target_mode)
> +		exc_offset = CURRENT_EL_SP_ELx_VECTOR;
> +	else if ((mode | PSR_MODE_THREAD_BIT) == target_mode)
> +		exc_offset = CURRENT_EL_SP_EL0_VECTOR;
> +	else if (!(mode & PSR_MODE32_BIT))
> +		exc_offset = LOWER_EL_AArch64_VECTOR;
> +	else
> +		exc_offset = LOWER_EL_AArch32_VECTOR;
> +
> +	return exc_offset + type;
> +}
> +
>  /*
> - * This performs the exception entry at a given EL (@target_mode), stashing PC
> - * and PSTATE into ELR and SPSR respectively, and compute the new PC/PSTATE.
> - * The EL passed to this function *must* be a non-secure, privileged mode with
> - * bit 0 being set (PSTATE.SP == 1).
> - *
>   * When an exception is taken, most PSTATE fields are left unchanged in the
>   * handler. However, some are explicitly overridden (e.g. M[4:0]). Luckily all
>   * of the inherited bits have the same position in the AArch64/AArch32 SPSR_ELx
> @@ -82,50 +95,17 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
>   * Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from
>   * MSB to LSB.
>   */
> -static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
> -			      enum exception_type type)
> +unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
> +				unsigned long sctlr, unsigned long target_mode)

I really dislike the has_mte and sctlr thing.

The main reason is that it will not scale as we end up hiding more
features from the host (think PM and FEAT_EBEP, for example). Even
worse, some bits are not necessarily sourced from PSTATE or SCTLR
(EXLOCK depends on GCSCR_ELx, for example).

I think you really need to turn this into something flexible enough to
work for both host and guest, with an actual abstraction. It is likely
to look like a list of register accessors used to source the correct
data.
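
To illustrate what I have in mind (only a sketch, with made-up names):

	struct except64_source {
		u64 (*read_sctlr)(const void *ctxt);
		u64 (*read_gcscr)(const void *ctxt);	/* for EXLOCK */
		/* grows as more features get hidden */
		const void *ctxt;
	};

	unsigned long get_except64_pstate(unsigned long old,
					  const struct except64_source *src,
					  unsigned long target_mode);

with one instance wired to the vcpu sysreg accessors for the guest, and
another reading the host context directly.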

But it could well be that open-coding it is the least horrid solution.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


