[PATCH 03/18] KVM: arm64: Handle trapping of FEAT_LS64* instructions
Marc Zyngier
maz at kernel.org
Tue Mar 4 07:47:11 PST 2025
On Tue, 04 Mar 2025 14:36:19 +0000,
Fuad Tabba <tabba at google.com> wrote:
>
> Hi Marc,
>
> On Mon, 10 Feb 2025 at 18:42, Marc Zyngier <maz at kernel.org> wrote:
> >
> > We generally don't expect FEAT_LS64* instructions to trap, unless
> > they are trapped by a guest hypervisor.
> >
> > Otherwise, this is just the guest playing tricks on us by using
> > an instruction that isn't advertised, which we handle with a well
> > deserved UNDEF.
> >
> > Signed-off-by: Marc Zyngier <maz at kernel.org>
> > ---
> > arch/arm64/kvm/handle_exit.c | 64 ++++++++++++++++++++++++++++++++++++
> > 1 file changed, 64 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> > index 512d152233ff2..4f8354bf7dc5f 100644
> > --- a/arch/arm64/kvm/handle_exit.c
> > +++ b/arch/arm64/kvm/handle_exit.c
> > @@ -294,6 +294,69 @@ static int handle_svc(struct kvm_vcpu *vcpu)
> >  	return 1;
> >  }
> >  
> > +static int handle_ls64b(struct kvm_vcpu *vcpu)
> > +{
> > +	struct kvm *kvm = vcpu->kvm;
> > +	u64 esr = kvm_vcpu_get_esr(vcpu);
> > +	u64 iss = ESR_ELx_ISS(esr);
> > +	bool allowed;
> > +
> > +	switch (iss) {
> > +	case ESR_ELx_ISS_ST64BV:
> > +		allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V);
> > +		break;
> > +	case ESR_ELx_ISS_ST64BV0:
> > +		allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA);
> > +		break;
> > +	case ESR_ELx_ISS_LDST64B:
> > +		allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64);
> > +		break;
> > +	default:
> > +		/* Clearly, we're missing something. */
> > +		goto unknown_trap;
> > +	}
> > +
> > +	if (!allowed)
> > +		goto undef;
> > +
> > +	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> > +		u64 hcrx = __vcpu_sys_reg(vcpu, HCRX_EL2);
> > +		bool fwd;
> > +
> > +		switch (iss) {
> > +		case ESR_ELx_ISS_ST64BV:
> > +			fwd = !(hcrx & HCRX_EL2_EnASR);
> > +			break;
> > +		case ESR_ELx_ISS_ST64BV0:
> > +			fwd = !(hcrx & HCRX_EL2_EnAS0);
> > +			break;
> > +		case ESR_ELx_ISS_LDST64B:
> > +			fwd = !(hcrx & HCRX_EL2_EnALS);
> > +			break;
> > +		default:
> > +			/* We don't expect to be here */
> > +			fwd = false;
> > +		}
> > +
> > +		if (fwd) {
> > +			kvm_inject_nested_sync(vcpu, esr);
> > +			return 1;
> > +		}
> > +	}
> > +
> > +unknown_trap:
> > +	/*
> > +	 * If we land here, something must be very wrong, because we
> > +	 * have no idea why we trapped at all. Warn and undef as a
> > +	 * fallback.
> > +	 */
> > +	WARN_ON(1);
>
> nit: should this be WARN_ONCE() instead?
>
> > +
> > +undef:
> > +	kvm_inject_undefined(vcpu);
> > +	return 1;
> > +}
>
> I'm wondering if this can be simplified by having one switch()
> statement that toggles both allowed and fwd (or maybe even only fwd),
> and then inject depending on that, e.g.,
>
> +static int handle_ls64b(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +	bool is_nv = vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu);
> +	u64 hcrx = __vcpu_sys_reg(vcpu, HCRX_EL2);
> +	u64 esr = kvm_vcpu_get_esr(vcpu);
> +	u64 iss = ESR_ELx_ISS(esr);
> +	bool fwd = false;
> +
> +	switch (iss) {
> +	case ESR_ELx_ISS_ST64BV:
> +		fwd = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V) &&
> +		      !(hcrx & HCRX_EL2_EnASR);

Ah, I know what I dislike about this approach: if your L1 guest runs
at EL2, HCRX_EL2 doesn't apply (it is only relevant for an L2 guest),
yet we evaluate it unconditionally.

I think this still works, because you shouldn't have HCRX_EL2.EnASR
clear on the host if LS64_V is advertised to the guest, but it feels a
bit fragile.

I'll have another think about refactoring that code.
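
Maybe something along these lines (completely untested, just a sketch
of the shape I have in mind): keep the single switch, but only look at
HCRX_EL2 once we know we're actually running an L2 guest, and pick up
the WARN_ONCE() suggestion while we're at it:

static int handle_ls64b(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = vcpu->kvm;
	u64 esr = kvm_vcpu_get_esr(vcpu);
	u64 iss = ESR_ELx_ISS(esr);
	bool is_l2 = vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu);
	/* HCRX_EL2 only matters for an L2 guest; never read it otherwise */
	u64 hcrx = is_l2 ? __vcpu_sys_reg(vcpu, HCRX_EL2) : 0;
	bool allowed, fwd = false;

	switch (iss) {
	case ESR_ELx_ISS_ST64BV:
		allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V);
		fwd = is_l2 && !(hcrx & HCRX_EL2_EnASR);
		break;
	case ESR_ELx_ISS_ST64BV0:
		allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA);
		fwd = is_l2 && !(hcrx & HCRX_EL2_EnAS0);
		break;
	case ESR_ELx_ISS_LDST64B:
		allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64);
		fwd = is_l2 && !(hcrx & HCRX_EL2_EnALS);
		break;
	default:
		/* Clearly, we're missing something. */
		WARN_ONCE(1, "Unexpected LS64 trap, ISS=%llx\n", iss);
		allowed = false;
	}

	if (allowed && fwd) {
		/* Trap taken on behalf of the L1 hypervisor: forward it */
		kvm_inject_nested_sync(vcpu, esr);
		return 1;
	}

	/* Advertised and not forwarded: we shouldn't have trapped at all */
	WARN_ONCE(allowed, "Spurious LS64 trap, ISS=%llx\n", iss);

	kvm_inject_undefined(vcpu);
	return 1;
}

That way the L1-at-EL2 case never depends on a register that doesn't
apply to it.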
Thanks,
M.
--
Without deviation from the norm, progress is not possible.