[PATCH 04/11] KVM: arm64: Move PC rollback on SError to HYP

Mark Rutland <mark.rutland@arm.com>
Mon Oct 26 10:06:06 EDT 2020


On Mon, Oct 26, 2020 at 01:34:43PM +0000, Marc Zyngier wrote:
> Instead of handling the "PC rollback on SError during HVC" at EL1 (which
> requires disclosing PC to a potentially untrusted kernel), let's move
> this fixup to ... fixup_guest_exit(), which is where we do all fixups.
> 
> Isn't that neat?
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Acked-by: Mark Rutland <mark.rutland@arm.com>
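
For anyone following along: fixup_guest_exit() can check this directly
because the SError pending flag is folded into the top bit of the exit
code. Quoting the macros roughly from memory (the real definitions live
in arch/arm64/include/asm/kvm_asm.h):

	#define ARM_EXIT_WITH_SERROR_BIT  31
	#define ARM_EXCEPTION_CODE(x)     ((x) & ~(1U << ARM_EXIT_WITH_SERROR_BIT))
	#define ARM_SERROR_PENDING(x)     !!((x) & (1U << ARM_EXIT_WITH_SERROR_BIT))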

Mark.

> ---
>  arch/arm64/kvm/handle_exit.c            | 17 -----------------
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 15 +++++++++++++++
>  2 files changed, 15 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index d4e00a864ee6..f79137ee4274 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -241,23 +241,6 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
>  {
>  	struct kvm_run *run = vcpu->run;
>  
> -	if (ARM_SERROR_PENDING(exception_index)) {
> -		u8 esr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
> -
> -		/*
> -		 * HVC already have an adjusted PC, which we need to
> -		 * correct in order to return to after having injected
> -		 * the SError.
> -		 *
> -		 * SMC, on the other hand, is *trapped*, meaning its
> -		 * preferred return address is the SMC itself.
> -		 */
> -		if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
> -			*vcpu_pc(vcpu) -= 4;
> -
> -		return 1;
> -	}
> -
>  	exception_index = ARM_EXCEPTION_CODE(exception_index);
>  
>  	switch (exception_index) {
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index d687e574cde5..668f02c7b0b3 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -411,6 +411,21 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
>  		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
>  
> +	if (ARM_SERROR_PENDING(*exit_code)) {
> +		u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
> +
> +		/*
> +		 * HVC already has an adjusted PC, which we need to
> +		 * correct in order to return to the HVC once the
> +		 * SError has been injected.
> +		 *
> +		 * SMC, on the other hand, is *trapped*, meaning its
> +		 * preferred return address is the SMC itself.
> +		 */
> +		if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
> +			*vcpu_pc(vcpu) -= 4;
> +	}
> +
>  	/*
>  	 * We're using the raw exception code in order to only process
>  	 * the trap if no SError is pending. We will come back to the
> -- 
> 2.28.0
> 
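
FWIW, for anyone not familiar with the hyp switch code: fixup_guest_exit()
is invoked straight out of the world-switch loop, so with this change the
rollback happens before the exit is ever exposed to the host. Sketching
the call site roughly from memory (see __kvm_vcpu_run() in
arch/arm64/kvm/hyp/{vhe,nvhe}/switch.c; details elided):

	do {
		/* Enter the guest; returns an ARM_EXCEPTION_* exit code,
		 * possibly with the SError pending bit set. */
		exit_code = __guest_enter(vcpu);

		/* Returns true if the exit was handled entirely at hyp
		 * and the guest should be re-entered; the HVC PC
		 * rollback above now happens in here. */
	} while (fixup_guest_exit(vcpu, &exit_code));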