[PATCH v6 14/15] KVM: arm64: Fold redundant exit code checks out of fixup_guest_exit()
Dave Martin
Dave.Martin@arm.com
Tue May 8 09:44:59 PDT 2018
The entire tail of fixup_guest_exit() consists of if statements of
the form if (x && *exit_code == ARM_EXCEPTION_TRAP). As a result,
we can check the exit code just once, bail out of the function early
when it doesn't match, and simplify the remaining if conditions.
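
To make the shape of the change concrete, here is a condensed sketch
(not the actual switch.c code; x, y and the handler bodies are
placeholders):

	/* Before: each handler repeats the exit code test. */
	if (x && *exit_code == ARM_EXCEPTION_TRAP) {
		/* handle one class of trap */
	}
	if (y && *exit_code == ARM_EXCEPTION_TRAP) {
		/* handle another class of trap */
	}

	/* After: test once and bail out early; the remaining
	 * conditions no longer need to mention *exit_code.
	 */
	if (*exit_code != ARM_EXCEPTION_TRAP)
		goto exit;

	if (x) {
		/* handle one class of trap */
	}
	if (y) {
		/* handle another class of trap */
	}
exit:
	return false;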
The only awkward case is where *exit_code is changed to
ARM_EXCEPTION_EL1_SERROR on an illegal GICv2 CPU interface access:
there, the GICv3 trap handling code is skipped using a goto. This
avoids pointlessly evaluating the static branch check for the GICv3
case, even though vgic_v2_cpuif_trap and vgic_v3_cpuif_trap can't
both be true unless the host has both a GICv3 and a GICv2: that
sounds stupid, but I haven't satisfied myself that it can't happen.
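
Condensed, the resulting flow is shaped like the following sketch
(valid_cpuif_data_abort and is_sysreg_trap are placeholders for the
real checks, and the handler bodies are elided):

	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
		if (valid_cpuif_data_abort) {
			/* ... handle the access; promote an illegal
			 * access by setting
			 * *exit_code = ARM_EXCEPTION_EL1_SERROR ...
			 */
			goto exit;	/* skip the GICv3 check below */
		}
	}

	if (static_branch_unlikely(&vgic_v3_cpuif_trap) && is_sysreg_trap) {
		/* ... emulate the GICv3 cpuif access ... */
	}

exit:
	return false;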
No functional change.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
---
Changes since v5:
Requested by Marc Zyngier:
* Move "goto exit" to the end of the data abort handling code under
gicv2 cpuif trap handling, since the gicv3-specific handling is only
for sysreg traps.
---
arch/arm64/kvm/hyp/switch.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 21e5543..5be472d 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -386,11 +386,13 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	 * same PC once the SError has been injected, and replay the
 	 * trapping instruction.
 	 */
-	if (*exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
+	if (*exit_code != ARM_EXCEPTION_TRAP)
+		goto exit;
+
+	if (!__populate_fault_info(vcpu))
 		return true;
 
-	if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
-	    *exit_code == ARM_EXCEPTION_TRAP) {
+	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
 		bool valid;
 
 		valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW &&
@@ -416,11 +418,12 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 					*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
 				*exit_code = ARM_EXCEPTION_EL1_SERROR;
 			}
+
+			goto exit;
 		}
 	}
 
 	if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
-	    *exit_code == ARM_EXCEPTION_TRAP &&
 	    (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 ||
 	     kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) {
 		int ret = __vgic_v3_perform_cpuif_access(vcpu);
@@ -429,6 +432,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 			return true;
 	}
 
+exit:
 	/* Return to the host kernel and handle the exit */
 	return false;
 }
--
2.1.4