[PATCH] KVM: arm64: Skip CMOs when updating a PTE pointing to non-memory
Marc Zyngier
maz at kernel.org
Mon Apr 26 11:36:05 BST 2021
Sumit Gupta and Krishna Reddy both reported that for MMIO regions
mapped into userspace using VFIO, a PTE update can trigger an MMU
notifier reaching kvm_set_spte_hva().
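
The path into KVM is the core change_pte MMU notifier. A simplified
sketch of the relevant plumbing (condensed from virt/kvm/kvm_main.c
around this kernel version; locking and mmu_notifier_seq handling
dropped for brevity):

	/* Userspace changed a PTE; propagate it to the stage-2 tables. */
	static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
						struct mm_struct *mm,
						unsigned long address,
						pte_t pte)
	{
		struct kvm *kvm = mmu_notifier_to_kvm(mn);

		/* On arm64, this lands in kvm_set_spte_hva() below. */
		if (kvm_set_spte_hva(kvm, address, pte))
			kvm_flush_remote_tlbs(kvm);
	}

Any PTE update on a VMA that is also mapped into a guest thus hands
its pfn to kvm_set_spte_hva(), including pfns backing VFIO MMIO
regions.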
There is an assumption baked into kvm_set_spte_hva() that it only
deals with memory pages, and not MMIO. For this purpose, it
performs a cache clean of the potentially newly mapped page.
However, for an MMIO range, this explodes, as there is no linear
mapping for this range (and doing cache maintenance on it would
make little sense anyway).
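
To see why it explodes: the cleaning helper goes through the kernel
linear map, which only covers actual RAM. A rough sketch of what it
does (simplified from arch/arm64/include/asm/kvm_mmu.h in this
timeframe; the FWB short-circuit is omitted):

	static inline void __clean_dcache_guest_page(kvm_pfn_t pfn,
						     unsigned long size)
	{
		/*
		 * page_address() computes the linear-map alias of the
		 * page. An MMIO pfn has no linear mapping, so the
		 * resulting VA is bogus and the DC instructions issued
		 * by kvm_flush_dcache_to_poc() fault on it.
		 */
		void *va = page_address(pfn_to_page(pfn));

		kvm_flush_dcache_to_poc(va, size);
	}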
Checking the validity of the page before performing the CMO
addresses the problem.
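
For reference, kvm_is_device_pfn() used in the hunk below is the
existing helper in arch/arm64/kvm/mmu.c, which at this point boils
down to a pfn_valid() check:

	static bool kvm_is_device_pfn(unsigned long pfn)
	{
		/* No struct page backing this pfn => not memory. */
		return !pfn_valid(pfn);
	}
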
Reported-by: Krishna Reddy <vdumpa at nvidia.com>
Reported-by: Sumit Gupta <sumitg at nvidia.com>
Tested-by: Sumit Gupta <sumitg at nvidia.com>
Signed-off-by: Marc Zyngier <maz at kernel.org>
Link: https://lore.kernel.org/r/5a8825bc-286e-b316-515f-3bd3c9c70a80@nvidia.com
---
arch/arm64/kvm/mmu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cd4d51ae3d4a..564a0f7fcd05 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1236,7 +1236,8 @@ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 	 * We've moved a page around, probably through CoW, so let's treat it
 	 * just like a translation fault and clean the cache to the PoC.
 	 */
-	clean_dcache_guest_page(pfn, PAGE_SIZE);
+	if (!kvm_is_device_pfn(pfn))
+		clean_dcache_guest_page(pfn, PAGE_SIZE);
 	handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &pfn);
 	return 0;
 }
--
2.30.2