[PATCH v2 01/22] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap()
Will Deacon
will at kernel.org
Wed Aug 19 05:03:20 EDT 2020
Hi Gavin,
Cheers for taking a look.
On Wed, Aug 19, 2020 at 02:38:39PM +1000, Gavin Shan wrote:
> On 8/18/20 11:27 PM, Will Deacon wrote:
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 662b0c99a63d..4a24ebdc6fc6 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1489,19 +1489,17 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> > ret = kvm_mmu_topup_memory_cache(&cache,
> > kvm_mmu_cache_min_pages(kvm));
> > if (ret)
> > - goto out;
> > + break;
> > spin_lock(&kvm->mmu_lock);
> > ret = stage2_set_pte(&kvm->arch.mmu, &cache, addr, &pte,
> > KVM_S2PTE_FLAG_IS_IOMAP);
> > spin_unlock(&kvm->mmu_lock);
> > if (ret)
> > - goto out;
> > + break;
> > pfn++;
> > }
> > -out:
> > - kvm_mmu_free_memory_cache(&cache);
> > return ret;
> > }
>
> It seems incorrect. The cache is tracked by a local variable (@cache),
> meaning the cache is only visible to kvm_phys_addr_ioremap() and its
> callees. So it's correct to free unused pages in two cases: (1) an error
> is returned, or (2) higher levels of the page tables were previously
> populated and not all of the pre-allocated pages were used. Otherwise,
> this leads to a memory leak.
Well spotted, you're completely right. I was _sure_ this was the vCPU
memcache and I even said as much in the commit message, but it's not, and it
never was, so I can drop this patch. If there are any other patches in the
series I can drop, please let me know! I did test with kmemleak enabled,
but I guess that doesn't track page allocations going into the memcache.
It _might_ be an idea to have a per-VM memcache to handle these allocations,
as that might offer some reuse over sticking one on the stack each time, but
then again kvm_phys_addr_ioremap() is hardly a fastpath.
Will