[PATCH v3 06/15] KVM: arm64: Add support for KVM userfault exits
James Houghton
jthoughton at google.com
Tue Jun 17 21:24:15 PDT 2025
To support KVM userfault exits on arm64:
1. Force mappings to be 4K while KVM_MEM_USERFAULT is enabled.
2. Return -EFAULT when kvm_do_userfault() reports that the page is
   marked userfault (or that reading the userfault bitmap failed).
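For context, a rough sketch of what the generic kvm_do_userfault()
helper boils down to. The helper itself is introduced earlier in the
series; the field and bitmap names below are assumptions inferred from
the call site in this patch, not the authoritative implementation:

	/* Sketch only: returns true if userspace must resolve the fault. */
	static bool kvm_do_userfault(struct kvm_vcpu *vcpu,
				     struct kvm_page_fault *fault)
	{
		unsigned long __user *ptr;
		unsigned long chunk;
		gfn_t offset;

		if (!kvm_is_userfault_memslot(fault->slot))
			return false;

		offset = fault->gfn - fault->slot->base_gfn;
		ptr = fault->slot->userfault_bitmap + offset / BITS_PER_LONG;

		/* A failed bitmap read also exits, hence the -EFAULT above. */
		if (!__get_user(chunk, ptr) &&
		    !(chunk & BIT(offset % BITS_PER_LONG)))
			return false;

		/*
		 * Describe the fault so userspace can service it; the
		 * series also sets a userfault flag in ->flags.
		 */
		vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
		vcpu->run->memory_fault.gpa = (u64)fault->gfn << PAGE_SHIFT;
		vcpu->run->memory_fault.size = PAGE_SIZE;
		return true;
	}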
kvm_arch_commit_memory_region() was written assuming that, for
KVM_MR_FLAGS_ONLY changes, the flag being toggled must be
KVM_MEM_LOG_DIRTY_PAGES. With KVM_MEM_USERFAULT that is no longer the
case, so adjust the logic to bail out early when dirty logging isn't
being toggled.
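The net effect for userspace, sketched as a hypothetical VMM run-loop
excerpt. KVM_EXIT_MEMORY_FAULT is existing uAPI and is delivered with
KVM_RUN returning -EFAULT; 'vcpu_fd' and 'run' (the mmap'ed struct
kvm_run) are the usual per-vCPU handles:

	if (ioctl(vcpu_fd, KVM_RUN, 0) < 0 && errno == EFAULT &&
	    run->exit_reason == KVM_EXIT_MEMORY_FAULT) {
		__u64 gpa = run->memory_fault.gpa;

		/*
		 * Populate the 4K page backing 'gpa', clear the
		 * corresponding bit in the userfault bitmap, then
		 * re-enter the guest with another KVM_RUN.
		 */
	}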
Signed-off-by: James Houghton <jthoughton at google.com>
Signed-off-by: Sean Christopherson <seanjc at google.com>
---
arch/arm64/kvm/mmu.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0c209f2e1c7b2..d75a6685d6842 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1548,7 +1548,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* logging_active is guaranteed to never be true for VM_PFNMAP
* memslots.
*/
- if (logging_active) {
+ if (logging_active || is_protected_kvm_enabled() ||
+ kvm_is_userfault_memslot(memslot)) {
force_pte = true;
vma_shift = PAGE_SHIFT;
} else {
@@ -1637,6 +1638,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
mmu_seq = vcpu->kvm->mmu_invalidate_seq;
mmap_read_unlock(current->mm);
+ if (kvm_do_userfault(vcpu, &fault))
+ return -EFAULT;
+
pfn = __kvm_faultin_pfn(memslot, fault.gfn, fault.write ? FOLL_WRITE : 0,
&writable, &page);
if (pfn == KVM_PFN_ERR_HWPOISON) {
@@ -2134,15 +2138,19 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
const struct kvm_memory_slot *new,
enum kvm_mr_change change)
{
- bool log_dirty_pages = new && new->flags & KVM_MEM_LOG_DIRTY_PAGES;
+ u32 old_flags = old ? old->flags : 0;
+ u32 new_flags = new ? new->flags : 0;
+
+ /* Nothing to do if not toggling dirty logging. */
+ if (!((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES))
+ return;
/*
* At this point memslot has been committed and there is an
* allocated dirty_bitmap[], dirty pages will be tracked while the
* memory slot is write protected.
*/
- if (log_dirty_pages) {
-
+ if (new_flags & KVM_MEM_LOG_DIRTY_PAGES) {
if (change == KVM_MR_DELETE)
return;
--
2.50.0.rc2.692.g299adb8693-goog