[PATCH v2 2/3] KVM: arm64: Fix handling of merging tables into a block entry
Yanan Wang
wangyanan55 at huawei.com
Tue Dec 1 15:10:33 EST 2020
In the dirty logging case (logging_active == true), we need to collapse a
block entry into a table if necessary. After dirty logging is canceled,
when merging tables back into a block entry, we should not only free
the non-huge page-table pages but also invalidate all the TLB entries of
the non-huge mappings for the block. Without sufficient TLB invalidation,
multiple stale TLB entries for the memory in the block will remain cached.
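
For context, here is a rough sketch of how the pre-order walker callback
looks with this change applied. It is reconstructed from the hunk below
rather than copied verbatim from pgtable.c, so the exact signature and the
guard checks at the top (the data->anchor test and the
kvm_block_mapping_supported() test) should be treated as assumptions about
the surrounding code; only the TLBI call is what this patch actually changes:

static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
				     kvm_pte_t *ptep,
				     struct stage2_map_data *data)
{
	/*
	 * Assumed guards: nothing to do if we already hold an anchor, or
	 * if a block mapping cannot replace this table at this level.
	 */
	if (data->anchor)
		return 0;

	if (!kvm_block_mapping_supported(addr, end, data->phys, level))
		return 0;

	/* Invalidate the table entry so no new walks use the sub-tree. */
	kvm_set_invalid_pte(ptep);

	/*
	 * Flush the whole stage-2 for this VMID: a single IPA-based TLBI
	 * would only cover one translation, while the sub-tree being
	 * collapsed may have many cached leaf entries.
	 */
	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);

	/* Record the anchor so the table is replaced by a block later on. */
	data->anchor = ptep;
	return 0;
}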
Signed-off-by: Will Deacon <will at kernel.org>
Signed-off-by: Yanan Wang <wangyanan55 at huawei.com>
---
arch/arm64/kvm/hyp/pgtable.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b232bdd142a6..23a01dfcb27a 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -496,7 +496,13 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 		return 0;
 
 	kvm_set_invalid_pte(ptep);
-	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);
+
+	/*
+	 * Invalidate the whole stage-2, as we may have numerous leaf
+	 * entries below us which would otherwise need invalidating
+	 * individually.
+	 */
+	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);
 	data->anchor = ptep;
 	return 0;
 }
--
2.19.1