[PATCH] riscv: kvm: mmu: fix unlocked gstage unmap in kvm_unmap_gfn_range

cuitao cuitao at kylinos.cn
Thu Apr 16 02:18:05 PDT 2026


When spin_trylock fails in kvm_unmap_gfn_range(), the code still
proceeds to call kvm_riscv_gstage_unmap_range() without holding
kvm->mmu_lock, racing with concurrent modifications of the g-stage
page tables.

Skip the unmap and return false on trylock failure so the MMU notifier
will flush and retry, matching the pattern used by other architectures.

Signed-off-by: cuitao <cuitao at kylinos.cn>
---
 arch/riscv/kvm/mmu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 088d33ba90ed..d37cc717fdaf 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -245,7 +245,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	struct kvm_gstage gstage;
-	bool mmu_locked;
 
 	if (!kvm->arch.pgd)
 		return false;
@@ -254,12 +253,13 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	gstage.flags = 0;
 	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
 	gstage.pgd = kvm->arch.pgd;
-	mmu_locked = spin_trylock(&kvm->mmu_lock);
+	if (!spin_trylock(&kvm->mmu_lock))
+		return false;
+
 	kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT,
 				     (range->end - range->start) << PAGE_SHIFT,
 				     range->may_block);
-	if (mmu_locked)
-		spin_unlock(&kvm->mmu_lock);
+	spin_unlock(&kvm->mmu_lock);
 	return false;
 }
 
-- 
2.43.0