[RFC PATCH 24/37] KVM: x86/mmu: Move kvm_mmu_hugepage_adjust() up to fault handler
David Matlack
dmatlack at google.com
Thu Dec 8 11:38:44 PST 2022
Move the call to kvm_mmu_hugepage_adjust() up to the fault handler
rather than calling it from kvm_tdp_mmu_map(). Also make the same change
to direct_map() for consistency. This reduces the TDP MMU's dependency
on an x86-specific function, so that the TDP MMU can be moved into common
code.

No functional change intended.
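
For illustration only (not part of the patch): a minimal stand-alone C
sketch of the call ordering this change establishes. All types and helper
names below are simplified stand-ins for the real KVM definitions; only
the ordering mirrors the patch, where the arch-specific level adjustment
happens in the fault handler before the mapping routine runs.

	#include <stdio.h>

	/* Simplified stand-in for struct kvm_page_fault. */
	struct kvm_page_fault {
		unsigned long gfn;
		int goal_level;		/* desired mapping level */
	};

	/* Stand-in for the x86-specific kvm_mmu_hugepage_adjust(). */
	static void hugepage_adjust(struct kvm_page_fault *fault)
	{
		/* Pick the final mapping level before any mapping is done. */
		fault->goal_level = 1;
		printf("gfn 0x%lx: goal level adjusted to %d\n",
		       fault->gfn, fault->goal_level);
	}

	/* Stand-in for kvm_tdp_mmu_map(): no longer adjusts the level itself. */
	static int tdp_mmu_map(struct kvm_page_fault *fault)
	{
		printf("mapping gfn 0x%lx at level %d\n",
		       fault->gfn, fault->goal_level);
		return 0;
	}

	/* Stand-in for kvm_tdp_mmu_page_fault(): the adjustment lives here now. */
	static int tdp_mmu_page_fault(struct kvm_page_fault *fault)
	{
		hugepage_adjust(fault);		/* arch-specific step stays in the handler */
		return tdp_mmu_map(fault);	/* mapping step has no x86-only call */
	}

	int main(void)
	{
		struct kvm_page_fault fault = { .gfn = 0x1234, .goal_level = 2 };

		return tdp_mmu_page_fault(&fault);
	}

Keeping the level adjustment in the x86 fault handlers leaves
kvm_tdp_mmu_map() free of x86-only calls, which is what allows the TDP MMU
mapping path to be moved into common code.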
Signed-off-by: David Matlack <dmatlack at google.com>
---
 arch/x86/kvm/mmu/mmu.c     | 6 ++++--
 arch/x86/kvm/mmu/tdp_mmu.c | 2 --
 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0593d4a60139..9307608ae975 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3151,8 +3151,6 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	int ret;
 	gfn_t base_gfn = fault->gfn;
 
-	kvm_mmu_hugepage_adjust(vcpu, fault);
-
 	trace_kvm_mmu_spte_requested(fault);
 	for_each_shadow_entry(vcpu, fault->addr, it) {
 		/*
@@ -4330,6 +4328,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		goto out_unlock;
 
+	kvm_mmu_hugepage_adjust(vcpu, fault);
+
 	r = direct_map(vcpu, fault);
 
 out_unlock:
@@ -4408,6 +4408,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
+	kvm_mmu_hugepage_adjust(vcpu, fault);
+
 	r = kvm_tdp_mmu_map(vcpu, fault);
 
 out_unlock:
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b997f84c0ea7..e6708829714c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1157,8 +1157,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	struct kvm_mmu_page *sp;
 	int ret = RET_PF_RETRY;
 
-	kvm_mmu_hugepage_adjust(vcpu, fault);
-
 	trace_kvm_mmu_spte_requested(fault);
 
 	rcu_read_lock();
--
2.39.0.rc1.256.g54fd8350bd-goog