[PATCH v1] KVM: arm64: nv: Use kvm_phys_size() for VNCR invalidation range

Marc Zyngier maz at kernel.org
Mon Feb 2 06:45:39 PST 2026


On Mon, 02 Feb 2026 13:04:24 +0000,
Fuad Tabba <tabba at google.com> wrote:
> 
> KVM: arm64: nv: Use kvm_phys_size() for VNCR invalidation range
> 
> Protected mode uses the `pkvm_mappings` member of the union inside
> `struct kvm_pgtable`, which aliases `ia_bits`, the field used in
> non-protected mode.
> 
> Attempting to read `pgt->ia_bits` in kvm_nested_s2_unmap() and
> kvm_nested_s2_wp() therefore interprets `pkvm_mappings` pointers or
> state as a shift amount, triggering a UBSAN shift-out-of-bounds
> error:
> 
>     UBSAN: shift-out-of-bounds in arch/arm64/kvm/nested.c:1127:34
>     shift exponent 174565952 is too large for 64-bit type 'unsigned long'
>     Call trace:
>      __ubsan_handle_shift_out_of_bounds+0x28c/0x2c0
>      kvm_nested_s2_unmap+0x228/0x248
>      kvm_arch_flush_shadow_memslot+0x98/0xc0
>      kvm_set_memslot+0x248/0xce0
> 
> Fix this by using kvm_phys_size() to determine the IPA size. This helper
> is independent of the software page table representation and works
> correctly for both protected and non-protected modes, as it derives the
> size directly from VTCR_EL2.
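> 
> For reference, kvm_phys_size() and its helper (abbreviated from
> asm/kvm_mmu.h; exact definitions may differ by kernel version)
> sidestep the union entirely:
> 
>     /* The IPA size is encoded in the T0SZ field of VTCR_EL2, which
>      * is stashed per-MMU, so no page-table metadata is consulted. */
>     static inline u32 kvm_phys_shift(struct kvm_s2_mmu *mmu)
>     {
>             return VTCR_EL2_IPA(mmu->vtcr);
>     }
> 
>     static inline u64 kvm_phys_size(struct kvm_s2_mmu *mmu)
>     {
>             return BIT(kvm_phys_shift(mmu));
>     }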

I'm a bit confused by the explanation. We have plenty of code that
uses pgt->ia_bits outside of the NV code. And yet that code is not
affected by this?

I'm asking because NV is clearly a case where the pkvm_mappings
aliasing is unambiguously *not* happening.

Isn't the real issue that we enter the NV handling code for any S2
manipulation, irrespective of NV support? Would something like the
diff below help instead?

Thanks,

	M.

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index cdeeb8f09e722..d03e9b71bf6cd 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1101,6 +1101,9 @@ void kvm_nested_s2_wp(struct kvm *kvm)
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
 
@@ -1117,6 +1120,9 @@ void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block)
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
 
@@ -1133,6 +1139,9 @@ void kvm_nested_s2_flush(struct kvm *kvm)
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
 
@@ -1145,6 +1154,9 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
 	int i;
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
 
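For completeness: the early return matters even though the loop
already iterates zero times, because the UBSAN hit is in the VNCR
invalidation *after* the loop (nested.c:1127 in your trace), which,
in sketch form, is along the lines of:

    /* Runs unconditionally at the end of kvm_nested_s2_unmap(),
     * reading pgt->ia_bits even when the union currently holds
     * pkvm_mappings: */
    kvm_invalidate_vncr_ipa(kvm, 0, BIT(kvm->arch.mmu.pgt->ia_bits));

Bailing out early means we never poke at the union at all.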

-- 
Without deviation from the norm, progress is not possible.


