[PATCH 06/10] KVM: arm64: Offer early resume for non-blocking WFxT instructions

Joey Gouly joey.gouly at arm.com
Wed Apr 13 04:37:26 PDT 2022


Hi Marc,

On Tue, Apr 12, 2022 at 02:12:59PM +0100, Marc Zyngier wrote:
> For WFxT instructions used with very small delays, it is not
> unlikely that the deadling is already expired by the time we

typo: deadline

> reach the WFx handling code.
> 
> Check for this condition as soon as possible, and return to the
> guest immediately if we can.
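Just to illustrate the scenario (not from the patch, only for context): the
guest side would be doing something like the below, where a deadline only a
few ticks ahead can easily be in the past by the time the trap reaches the
handler:

	/* hypothetical guest snippet: short bounded wait using FEAT_WFxT */
	u64 deadline = read_sysreg(cntvct_el0) + 10;	/* a few ticks only */

	/*
	 * WFET takes an absolute CNTVCT_EL0 deadline in Xt; the mnemonic
	 * needs an assembler that knows about FEAT_WFxT (ARMv8.7).
	 */
	asm volatile("wfet %x0" : : "r" (deadline) : "memory");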
> 
> Signed-off-by: Marc Zyngier <maz at kernel.org>
> ---
>  arch/arm64/kvm/handle_exit.c | 25 ++++++++++++++++++++++---
>  1 file changed, 22 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 4260f2cd1971..87d9a36de860 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -80,17 +80,34 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
>   *
>   * @vcpu:	the vcpu pointer
>   *
> - * WFE: Yield the CPU and come back to this vcpu when the scheduler
> + * WFE[T]: Yield the CPU and come back to this vcpu when the scheduler
>   * decides to.
>   * WFI: Simply call kvm_vcpu_halt(), which will halt execution of
>   * world-switches and schedule other host processes until there is an
>   * incoming IRQ or FIQ to the VM.
>   * WFIT: Same as WFI, with a timed wakeup implemented as a background timer
> + *
> + * WF{I,E}T can immediately return if the deadline has already expired.
>   */
>  static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
>  {
>  	u64 esr = kvm_vcpu_get_esr(vcpu);
>  
> +	if (esr & ESR_ELx_WFx_ISS_WFxT) {
> +		if (esr & ESR_ELx_WFx_ISS_RV) {
> +			u64 val, now;
> +
> +			now = kvm_arm_timer_get_reg(vcpu, KVM_REG_ARM_TIMER_CNT);
> +			val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu));
> +
> +			if (now >= val)
> +				goto out;

If this returns early, the trace_kvm_wfx_arm64 tracepoint and the
wf{e,i}_exit_stat counters below will not be called/updated. Is that
intentional?
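
If they should still be accounted, one option (untested, just rearranging the
existing code) would be to hoist the tracepoint and counters above the new
early-return check, e.g.:

	u64 esr = kvm_vcpu_get_esr(vcpu);
	bool is_wfe = esr & ESR_ELx_WFx_ISS_WFE;

	/* account the exit before any early return */
	trace_kvm_wfx_arm64(*vcpu_pc(vcpu), is_wfe);
	if (is_wfe)
		vcpu->stat.wfe_exit_stat++;
	else
		vcpu->stat.wfi_exit_stat++;

	if (esr & ESR_ELx_WFx_ISS_WFxT) {
		/* deadline check / WFxT handling as in this patch */
		...
	}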

> +		} else {
> +			/* Treat WFxT as WFx if RN is invalid */
> +			esr &= ~ESR_ELx_WFx_ISS_WFxT;
> +		}
> +	}
> +
>  	if (esr & ESR_ELx_WFx_ISS_WFE) {
>  		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
>  		vcpu->stat.wfe_exit_stat++;
> @@ -98,11 +115,13 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
>  	} else {
>  		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false);
>  		vcpu->stat.wfi_exit_stat++;
> -		if ((esr & (ESR_ELx_WFx_ISS_RV | ESR_ELx_WFx_ISS_WFxT)) == (ESR_ELx_WFx_ISS_RV | ESR_ELx_WFx_ISS_WFxT))
> +
> +		if (esr & ESR_ELx_WFx_ISS_WFxT)
>  			vcpu->arch.flags |= KVM_ARM64_WFIT;
> +
>  		kvm_vcpu_wfi(vcpu);
>  	}
> -
> +out:
>  	kvm_incr_pc(vcpu);
>  
>  	return 1;

Thanks,
Joey


