[PATCH v2 01/22] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap()

Will Deacon <will@kernel.org>
Tue Aug 18 09:27:57 EDT 2020


kvm_phys_addr_ioremap() unconditionally empties out the memcache pages
for the current vCPU on return. This causes subsequent topups to allocate
fresh pages and is at odds with the behaviour when mapping memory in
user_mem_abort().

Remove the call to kvm_mmu_free_memory_cache() from
kvm_phys_addr_ioremap(), allowing the cached pages to be used by a later
mapping.
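
To illustrate the reuse this enables, here is a toy userspace model (not
kernel code; the toy_* names are hypothetical and only mirror the kernel's
kvm_mmu_topup_memory_cache()/kvm_mmu_memory_cache_alloc()/
kvm_mmu_free_memory_cache() helpers). The point of the sketch is the second
topup: with the final free removed, it only has to fill the difference.

/*
 * Toy model of the memcache reuse described above. Deliberately
 * simplified; builds and runs as plain C.
 */
#include <stdio.h>
#include <stdlib.h>

#define CACHE_CAPACITY 40	/* stand-in for the kernel's per-cache object limit */

struct toy_memory_cache {
	int nobjs;
	void *objects[CACHE_CAPACITY];
};

/* Fill the cache up to @min objects, allocating only what is missing. */
static int toy_topup_memory_cache(struct toy_memory_cache *mc, int min)
{
	while (mc->nobjs < min) {
		void *page = calloc(1, 4096);
		if (!page)
			return -1;
		mc->objects[mc->nobjs++] = page;
	}
	return 0;
}

/* Pop one pre-allocated object; callers are expected to have topped up. */
static void *toy_memory_cache_alloc(struct toy_memory_cache *mc)
{
	return mc->nobjs ? mc->objects[--mc->nobjs] : NULL;
}

/* Release everything still sitting in the cache. */
static void toy_free_memory_cache(struct toy_memory_cache *mc)
{
	while (mc->nobjs)
		free(mc->objects[--mc->nobjs]);
}

int main(void)
{
	struct toy_memory_cache cache = { 0 };
	void *pte_page;

	/* First "mapping": top up, then consume one object. */
	toy_topup_memory_cache(&cache, 3);
	pte_page = toy_memory_cache_alloc(&cache);	/* would become a table page */
	printf("leftover after first mapping: %d\n", cache.nobjs);

	/*
	 * Old behaviour: free the leftovers here, forcing the next topup to
	 * allocate fresh pages. With that step removed, the next topup finds
	 * the cache already warm and only allocates the difference.
	 */
	/* toy_free_memory_cache(&cache); */

	toy_topup_memory_cache(&cache, 3);
	printf("after second topup: %d\n", cache.nobjs);

	free(pte_page);			/* toy cleanup only */
	toy_free_memory_cache(&cache);	/* final teardown */
	return 0;
}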

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/mmu.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 662b0c99a63d..4a24ebdc6fc6 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1489,19 +1489,17 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 		ret = kvm_mmu_topup_memory_cache(&cache,
 						 kvm_mmu_cache_min_pages(kvm));
 		if (ret)
-			goto out;
+			break;
 		spin_lock(&kvm->mmu_lock);
 		ret = stage2_set_pte(&kvm->arch.mmu, &cache, addr, &pte,
 				     KVM_S2PTE_FLAG_IS_IOMAP);
 		spin_unlock(&kvm->mmu_lock);
 		if (ret)
-			goto out;
+			break;
 
 		pfn++;
 	}
 
-out:
-	kvm_mmu_free_memory_cache(&cache);
 	return ret;
 }
 
-- 
2.28.0.220.ged08abb693-goog



