[PATCH v2 1/3] RISC-V: KVM: Fix lost write protection on huge pages during dirty logging
wang.yechao255 at zte.com.cn
Wed Mar 4 01:26:01 PST 2026
From: Wang Yechao <wang.yechao255 at zte.com.cn>
When dirty logging is enabled in small chunks (e.g., QEMU's default chunk
size of 256K), the chunk size is always smaller than the page size
of huge pages (1G or 2M) used in the gstage page tables. This causes
write protection to be incorrectly skipped for huge PTEs because
the condition `(end - addr) >= page_size` is never satisfied.
Remove the size check in `kvm_riscv_gstage_wp_range()` to ensure huge
PTEs are always write-protected regardless of the chunk size. Additionally,
explicitly align the address down to the page size before invoking
`kvm_riscv_gstage_op_pte()` to guarantee that the address passed to the
operation function is page-aligned.
This fixes the issue where dirty pages might not be tracked correctly
when using huge pages.
Signed-off-by: Wang Yechao <wang.yechao255 at zte.com.cn>
---
arch/riscv/kvm/gstage.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
index b67d60d722c2..d2001d508046 100644
--- a/arch/riscv/kvm/gstage.c
+++ b/arch/riscv/kvm/gstage.c
@@ -304,10 +304,9 @@ void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end
if (!found_leaf)
goto next;
- if (!(addr & (page_size - 1)) && ((end - addr) >= page_size))
- kvm_riscv_gstage_op_pte(gstage, addr, ptep,
- ptep_level, GSTAGE_OP_WP);
-
+ addr = ALIGN_DOWN(addr, page_size);
+ kvm_riscv_gstage_op_pte(gstage, addr, ptep,
+ ptep_level, GSTAGE_OP_WP);
next:
addr += page_size;
}
--
2.47.3