[PATCH v3 3/7] KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory

Yicong Yang yangyicong at huawei.com
Thu Jun 26 01:09:02 PDT 2025


From: Yicong Yang <yangyicong at hisilicon.com>

If FEAT_LS64WB is not supported, the FEAT_LS64* instructions may only
access Device/Uncacheable memory; otherwise a data abort for unsupported
Exclusive or atomic access (FSC 0x35) is generated per the spec. The
exception level the abort is routed to is IMPLEMENTATION DEFINED, and it
may be implemented such that the abort is routed to EL2 on a VHE VM,
according to DDI0487K.a Section C3.2.12.2 "Single-copy atomic 64-byte
load/store".

If the implementation generates the DABT against the final enabled
stage of translation (stage-2), no valid ISV is indicated in the ESR,
so it is better to let userspace decide how to handle it. Reuse the
NISV_IO_ABORT_TO_USER path with the exit reason KVM_EXIT_ARM_LDST64B.

Signed-off-by: Yicong Yang <yangyicong at hisilicon.com>
---
 arch/arm64/include/asm/esr.h |  8 ++++++++
 arch/arm64/kvm/mmu.c         | 21 ++++++++++++++++++++-
 2 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index e1deed824464..63cd17f830da 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -124,6 +124,7 @@
 #define ESR_ELx_FSC_SEA_TTW(n)	(0x14 + (n))
 #define ESR_ELx_FSC_SECC	(0x18)
 #define ESR_ELx_FSC_SECC_TTW(n)	(0x1c + (n))
+#define ESR_ELx_FSC_EXCL_ATOMIC	(0x35)
 #define ESR_ELx_FSC_ADDRSZ	(0x00)
 
 /*
@@ -488,6 +489,13 @@ static inline bool esr_fsc_is_access_flag_fault(unsigned long esr)
 	       (esr == ESR_ELx_FSC_ACCESS_L(0));
 }
 
+static inline bool esr_fsc_is_excl_atomic_fault(unsigned long esr)
+{
+	esr &= ESR_ELx_FSC;
+
+	return esr == ESR_ELx_FSC_EXCL_ATOMIC;
+}
+
 static inline bool esr_fsc_is_addr_sz_fault(unsigned long esr)
 {
 	esr &= ESR_ELx_FSC;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 2942ec92c5a4..5f05d1c4b5a2 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1665,6 +1665,24 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault && device)
 		return -ENOEXEC;
 
+	/*
+	 * The target address is normal memory on the host. We got here
+	 * because either:
+	 * 1) the guest mapped it as device memory and did an LS64 access, or
+	 * 2) the VMM mistakenly reported it as device memory.
+	 * Hand it over to userspace.
+	 */
+	if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
+		struct kvm_run *run = vcpu->run;
+
+		run->exit_reason = KVM_EXIT_ARM_LDST64B;
+		run->arm_nisv.esr_iss = kvm_vcpu_dabt_iss_nisv_sanitized(vcpu);
+		run->arm_nisv.fault_ipa = fault_ipa |
+			(kvm_vcpu_get_hfar(vcpu) & (vma_pagesize - 1));
+
+		return -EAGAIN;
+	}
+
 	/*
 	 * Potentially reduce shadow S2 permissions to match the guest's own
 	 * S2. For exec faults, we'd only reach this point if the guest
@@ -1850,7 +1868,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	/* Check the stage-2 fault is trans. fault or write fault */
 	if (!esr_fsc_is_translation_fault(esr) &&
 	    !esr_fsc_is_permission_fault(esr) &&
-	    !esr_fsc_is_access_flag_fault(esr)) {
+	    !esr_fsc_is_access_flag_fault(esr) &&
+	    !esr_fsc_is_excl_atomic_fault(esr)) {
 		kvm_err("Unsupported FSC: EC=%#x xFSC=%#lx ESR_EL2=%#lx\n",
 			kvm_vcpu_trap_get_class(vcpu),
 			(unsigned long)kvm_vcpu_trap_get_fault(vcpu),
-- 
2.24.0
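
For context, a minimal sketch of the userspace side, showing how a VMM
run loop might consume the new exit. It assumes the KVM_EXIT_ARM_LDST64B
definition and the kvm_run layout added earlier in this series;
handle_ldst64b_exit() and the emulation policy are illustrative only,
not part of this patch:

#include <stdio.h>
#include <linux/kvm.h>	/* needs the uapi update from this series */

/* "run" is the vCPU's mmap'd struct kvm_run. */
static int handle_ldst64b_exit(struct kvm_run *run)
{
	if (run->exit_reason != KVM_EXIT_ARM_LDST64B)
		return 0;

	/*
	 * The sanitized ISS and the faulting IPA are reported through the
	 * existing arm_nisv layout (see the mmu.c hunk above). The VMM
	 * decides whether to emulate the 64-byte access or to inject an
	 * abort back into the guest.
	 */
	fprintf(stderr, "LS64 abort: ISS=%#llx IPA=%#llx\n",
		(unsigned long long)run->arm_nisv.esr_iss,
		(unsigned long long)run->arm_nisv.fault_ipa);

	return -1;	/* no emulation in this sketch */
}

As with KVM_EXIT_ARM_NISV, the kernel only reports the fault; whether to
emulate the access, ignore it, or inject an abort is a userspace policy
decision.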