LM regression: fce886a60207 KVM: arm64: Plumb the pKVM MMU in KVM

Marc Zyngier maz at kernel.org
Mon May 5 09:26:31 PDT 2025


On Mon, 05 May 2025 17:05:08 +0100,
Sebastian Ott <sebott at redhat.com> wrote:
> 
> On Mon, 5 May 2025, Marc Zyngier wrote:
> > On Mon, 05 May 2025 15:01:24 +0100,
> > Sebastian Ott <sebott at redhat.com> wrote:
> >> 
> >> On Mon, 5 May 2025, Marc Zyngier wrote:
> >>> On Mon, 05 May 2025 11:52:00 +0100,
> >>> Sebastian Ott <sebott at redhat.com> wrote:
> >>>> Doing back-and-forth migrations currently fails on arm after a couple of iterations.
> >>>> During the failing migration, KVM_RUN exits via guest_abort and returns -ENOMEM.
> >>>> I can reliably reproduce this by migrating between 2 qemu instances on an Ampere
> >>>> Altra machine; it fails after < 5 iterations. In this case qemu spits out
> >>>> something like this (other than that, nothing in the logs):
> >>>> 
> >>>> error: kvm run failed Cannot allocate memory
> >>>>  PC=0000aaaae7d48590 X00=0000aaaae80a2e00 X01=0000aaaae7ea2fc0
> >>>> X02=0000000001d3a5d0 X03=0000aaaae7eace8c X04=000000003b9aca00
> >>>> X05=000000000000004a X06=000000000000004a X07=0000000028000000
> >>>> X08=0000000000001d70 X09=0000000000000018 X10=000144b7d0000000
> >>>> X11=00ffffffffffffff X12=000000008378f367 X13=0000aaab1a202d70
> >>>> X14=0000000000000000 X15=0000000000000000 X16=0000ffffa2e2f7a8
> >>>> X17=0000ffffa2541f20 X18=000000000000a000 X19=84bfda6288cf2dd6
> >>>> X20=0000aaab1a1f1ce0 X21=000000007fffffff X22=0000ffffc5431788
> >>>> X23=0000aaab1a17db60 X24=0000ffffc5431770 X25=0000000100000000
> >>>> X26=0000004100000000 X27=0000000000000001 X28=0000aaab1a1f1c20
> >>>> X29=0000ffffc54316d0 X30=0000aaaae7f8cd24  SP=0000ffffc5431650
> >>>> PSTATE=20001000 --C- EL0t
> >>>> 
> >>>> Guest and host are otherwise idle, kvm is in normal VHE mode.
> >>>> 
> >>>> Git bisect points to (fce886a60207 "KVM: arm64: Plumb the pKVM MMU in KVM")
> >>>> I also double checked that by reverting this commit on top of 6.14.
> >>> 
> >>> Thanks for finding the triggering commit. Can you further identify *what*
> >>> causes the -ENOMEM? The only new -ENOMEM in that patch is the one
> >>> added to topup_hyp_memcache(), which shouldn't be called.
> >> 
> >> The kvm_pgtable_stage2_map() call in user_mem_abort() returns -ENOMEM
> >> because the memcache pointer was not initialized!
> >> 
> >> It looks like something like this, without the other conditions, could do the trick:
> >> 
> >> if (!is_protected_kvm_enabled())
> >> 	memcache = &vcpu->arch.mmu_page_cache;
> >> else
> >> 	memcache = &vcpu->arch.pkvm_memcache;
> >> 
> >> (I'll try this now)
> > 
> > Right, we end up with an uninitialised memcache variable. Why isn't
> > the compiler screaming?
> 
> Yeah. Also I was under the impression that these kinds of warnings tend to
> over-report...

Quite. The obvious conclusion is that compilers are crap. Oh well.

> 
> > I think you can indeed simply hoist the init of memcache very early
> > on, which should hopefully solve the issue.
> 
> It solves the issue for me. Please note that in this case the cache topup
> is not called - I hope that this is not an issue (but it was also the case
> before commit fce886a60207).

Indeed, and that's how I expect it to fail. We're also pretty lucky
that we get a NULL instead of some random pointer...

> From c594bbf9c3c4186594b798734ea9b1779be3b584 Mon Sep 17 00:00:00 2001
> From: Sebastian Ott <sebott at redhat.com>
> Date: Mon, 5 May 2025 11:09:58 -0400
> Subject: [PATCH] KVM: arm64: Fix uninitialized memcache pointer in user_mem_abort()
> 
> Commit fce886a60207 ("KVM: arm64: Plumb the pKVM MMU in KVM") made the
> initialization of the local memcache variable in user_mem_abort()
> conditional, leaving a codepath where it is used uninitialized via
> kvm_pgtable_stage2_map().
> 
> This can lead to migration failures where KVM_RUN exits with -ENOMEM.

Not only. It can fail on any path that requires a stage-2 allocation
without first going through a permission fault or logging. It is actually
pretty amazing that it fails so rarely...
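
For the archives, here is a self-contained toy (made-up names, not the
actual kernel code) that mirrors the shape of the bug: the cache pointer
is only assigned on the path that tops up the memcache, but it is handed
to the map routine on every path, so any fault that skips the topup uses
it uninitialised:

	#include <stdio.h>
	#include <errno.h>

	struct memcache { int nobjs; };

	static struct memcache page_cache;

	/* Stand-in for kvm_pgtable_stage2_map(): fails without a usable cache. */
	static int stage2_map(struct memcache *mc)
	{
		if (!mc || mc->nobjs == 0)
			return -ENOMEM;
		return 0;
	}

	/* Stand-in for user_mem_abort() after fce886a60207. */
	static int mem_abort(int needs_topup)
	{
		struct memcache *memcache;	/* note: no initialiser */

		if (needs_topup) {
			memcache = &page_cache;	/* only assigned here... */
			memcache->nobjs = 8;
		}

		return stage2_map(memcache);	/* ...but always used here */
	}

	int main(void)
	{
		/* A fault that skips the topup path uses memcache uninitialised. */
		printf("ret = %d\n", mem_abort(0));
		return 0;
	}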

> Fix this by making sure that memcache is always valid.
> 
> Fixes: fce886a60207 ("KVM: arm64: Plumb the pKVM MMU in KVM")
> Signed-off-by: Sebastian Ott <sebott at redhat.com>

+ Cc: stable at vger.kernel.org

> ---
>  arch/arm64/kvm/mmu.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 754f2fe0cc67..6c3c658c5f29 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1501,6 +1501,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
> 
> +	if (!is_protected_kvm_enabled())
> +		memcache = &vcpu->arch.mmu_page_cache;
> +	else
> +		memcache = &vcpu->arch.pkvm_memcache;
> +
>  	/*
>  	 * Permission faults just need to update the existing leaf entry,
>  	 * and so normally don't require allocations from the memcache. The
> @@ -1511,10 +1516,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
> 
>  		if (!is_protected_kvm_enabled()) {
> -			memcache = &vcpu->arch.mmu_page_cache;
>  			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
>  		} else {
> -			memcache = &vcpu->arch.pkvm_memcache;

Feel free to drop the {}s that are now superfluous; see the sketch after
the quoted hunk.

>  			ret = topup_hyp_memcache(memcache, min_pages);
>  		}
>  		if (ret)
> 
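
To spell it out, the topup block with the braces dropped would read
roughly like this (untested, simply restating the hunk above):

	if (!is_protected_kvm_enabled())
		ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
	else
		ret = topup_hyp_memcache(memcache, min_pages);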

With the above addressed:

Reviewed-by: Marc Zyngier <maz at kernel.org>

Please post it as a standalone patch so that Oliver can add it to the
next batch of fixes for 6.15.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


