[PATCH v3 3/7] KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory
Yicong Yang
yangyicong at huawei.com
Thu Jun 26 04:39:41 PDT 2025
On 2025/6/26 16:51, Marc Zyngier wrote:
> On Thu, 26 Jun 2025 09:09:02 +0100,
> Yicong Yang <yangyicong at huawei.com> wrote:
>>
>> From: Yicong Yang <yangyicong at hisilicon.com>
>>
>> If FEAT_LS64WB is not supported, the FEAT_LS64* instructions can only
>> access Device/Uncacheable memory; otherwise a Data Abort for
>> unsupported Exclusive or atomic access (0x35) is generated per the
>> spec. It is IMPLEMENTATION DEFINED to which exception level the abort
>> is routed, and it is possible for it to be routed to EL2 on a VHE VM
>> according to DDI0487K.a Section C3.2.12.2 Single-copy atomic 64-byte
>> load/store.
>
> Nit: in DDI0487L.b (the latest as I write), this is in C3.2.6.
>
Will update the reference.
>>
>> If it's implemented such that the DABT is generated at the final
>> enabled stage (stage-2), then since no valid ISV is indicated in the
>> ESR, it's better to let the userspace decide how to handle it. Reuse
>> the NISV_IO_ABORT_TO_USER path with exit reason KVM_EXIT_ARM_LDST64B.
>>
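(For context, the VMM is expected to consume this exit roughly as in
the sketch below. This is only a sketch: KVM_EXIT_ARM_LDST64B is
introduced by this series, and do_emulate_ls64() is a hypothetical
userspace helper.)

	/* VMM side, sketch only */
	static int handle_ls64_exit(struct kvm_run *run)
	{
		switch (run->exit_reason) {
		case KVM_EXIT_ARM_LDST64B:
			/*
			 * No valid ISV, so userspace has to decode the
			 * LS64* access itself from the sanitized ISS
			 * and the faulting IPA.
			 */
			return do_emulate_ls64(run->arm_nisv.esr_iss,
					       run->arm_nisv.fault_ipa);
		default:
			return -EINVAL;
		}
	}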
>> Signed-off-by: Yicong Yang <yangyicong at hisilicon.com>
>> ---
>> arch/arm64/include/asm/esr.h | 8 ++++++++
>> arch/arm64/kvm/mmu.c | 21 ++++++++++++++++++++-
>> 2 files changed, 28 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
>> index e1deed824464..63cd17f830da 100644
>> --- a/arch/arm64/include/asm/esr.h
>> +++ b/arch/arm64/include/asm/esr.h
>> @@ -124,6 +124,7 @@
>> #define ESR_ELx_FSC_SEA_TTW(n) (0x14 + (n))
>> #define ESR_ELx_FSC_SECC (0x18)
>> #define ESR_ELx_FSC_SECC_TTW(n) (0x1c + (n))
>> +#define ESR_ELx_FSC_EXCL_ATOMIC (0x35)
>> #define ESR_ELx_FSC_ADDRSZ (0x00)
>>
>> /*
>> @@ -488,6 +489,13 @@ static inline bool esr_fsc_is_access_flag_fault(unsigned long esr)
>> (esr == ESR_ELx_FSC_ACCESS_L(0));
>> }
>>
>> +static inline bool esr_fsc_is_excl_atomic_fault(unsigned long esr)
>> +{
>> + esr &= ESR_ELx_FSC;
>> +
>> + return esr == ESR_ELx_FSC_EXCL_ATOMIC;
>> +}
>> +
>> static inline bool esr_fsc_is_addr_sz_fault(unsigned long esr)
>> {
>> esr &= ESR_ELx_FSC;
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 2942ec92c5a4..5f05d1c4b5a2 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1665,6 +1665,24 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>> if (exec_fault && device)
>> return -ENOEXEC;
>>
>> + /*
>> + * The target address is normal memory on the Host. We get here
>> + * because:
>> + * 1) the guest maps it as device memory and performs LS64
>> + * operations on it
>> + * 2) the VMM mistakenly reports it as device memory
>> + * Hand it over to the userspace.
>> + */
>> + if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
>> + struct kvm_run *run = vcpu->run;
>> +
>> + run->exit_reason = KVM_EXIT_ARM_LDST64B;
>> + run->arm_nisv.esr_iss = kvm_vcpu_dabt_iss_nisv_sanitized(vcpu);
>> + run->arm_nisv.fault_ipa = fault_ipa |
>> + (kvm_vcpu_get_hfar(vcpu) & (vma_pagesize - 1));
>> +
>> + return -EAGAIN;
>> + }
>
> I'm not sure that's the right thing to do.
>
> If:
>
> - the guest was told it doesn't have LS64WB,
>
> - it was told that some range is memory,
>
> - it uses that range as device,
>
> - thanks to FWB the resulting memory type is "Normal-Cacheable"
>
> - which results in an Unsupported Atomic exception
>
> why would we involve the VMM at all? The VMM clearly said it didn't
> want to be involved in this (we have a memslot).
>
OK, I thought we should let the VMM make the decision in all the cases
(both here and for emulated MMIO) based on the last discussion[*], so I
may have misunderstood it. If this is the case...
> I think we should simply inject the corresponding S1 fault back into
> the guest.
>
Let's simply inject a corresponding DABT back to the guest here and
only make the VMM handle the emulated MMIO case. Will update if there
are no further comments.
Thanks.
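Something like the below in user_mem_abort(), an untested sketch just
to confirm the direction (note that kvm_inject_dabt() injects a plain
external abort today, so reporting the precise Unsupported Exclusive
or atomic FSC (0x35) to the guest would need the injection helpers to
be extended):

	if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
		/*
		 * The target is memslot-backed memory, so don't
		 * involve the VMM. Give the guest a stage-1 DABT on
		 * the faulting VA instead and resume it.
		 */
		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
		return 0;
	}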
[*] https://lore.kernel.org/linux-arm-kernel/Z_NkHWStDJLo0cmY@linux.dev/