[PATCH v2 01/22] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap()

Gavin Shan gshan at redhat.com
Wed Aug 19 00:38:39 EDT 2020


Hi Will,

On 8/18/20 11:27 PM, Will Deacon wrote:
> kvm_phys_addr_ioremap() unconditionally empties out the memcache pages
> for the current vCPU on return. This causes subsequent topups to allocate
> fresh pages and is at odds with the behaviour when mapping memory in
> user_mem_abort().
> 
> Remove the call to kvm_mmu_free_memory_cache() from
> kvm_phys_addr_ioremap(), allowing the cached pages to be used by a later
> mapping.
> 
> Cc: Marc Zyngier <maz at kernel.org>
> Cc: Quentin Perret <qperret at google.com>
> Signed-off-by: Will Deacon <will at kernel.org>
> ---
>   arch/arm64/kvm/mmu.c | 6 ++----
>   1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 662b0c99a63d..4a24ebdc6fc6 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1489,19 +1489,17 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>   		ret = kvm_mmu_topup_memory_cache(&cache,
>   						 kvm_mmu_cache_min_pages(kvm));
>   		if (ret)
> -			goto out;
> +			break;
>   		spin_lock(&kvm->mmu_lock);
>   		ret = stage2_set_pte(&kvm->arch.mmu, &cache, addr, &pte,
>   				     KVM_S2PTE_FLAG_IS_IOMAP);
>   		spin_unlock(&kvm->mmu_lock);
>   		if (ret)
> -			goto out;
> +			break;
>   
>   		pfn++;
>   	}
>   
> -out:
> -	kvm_mmu_free_memory_cache(&cache);
>   	return ret;
>   }
>   

This seems incorrect. The cache is tracked by a local variable (@cache),
meaning it is only visible to kvm_phys_addr_ioremap() and its callees.
So it's necessary to free the unused pages in two cases: (1) an error
is returned, or (2) higher levels of the page tables were previously
populated, so not all of the pre-allocated pages were consumed.
Otherwise, this leads to a memory leak.

Thanks,
Gavin
