[RFC PATCH 2/3] KVM: arm64: Fix handling of merging tables into a block entry
Will Deacon
will at kernel.org
Mon Nov 30 11:01:20 EST 2020
Hi,
Cheers for the quick reply. See below for more questions...
On Mon, Nov 30, 2020 at 11:24:19PM +0800, wangyanan (Y) wrote:
> On 2020/11/30 21:34, Will Deacon wrote:
> > On Mon, Nov 30, 2020 at 08:18:46PM +0800, Yanan Wang wrote:
> > > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > > index 696b6aa83faf..fec8dc9f2baa 100644
> > > --- a/arch/arm64/kvm/hyp/pgtable.c
> > > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > > @@ -500,6 +500,9 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
> > > return 0;
> > > }
> > > +static void stage2_flush_dcache(void *addr, u64 size);
> > > +static bool stage2_pte_cacheable(kvm_pte_t pte);
> > > +
> > > static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> > > struct stage2_map_data *data)
> > > {
> > > @@ -507,9 +510,17 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> > > struct page *page = virt_to_page(ptep);
> > > if (data->anchor) {
> > > - if (kvm_pte_valid(pte))
> > > + if (kvm_pte_valid(pte)) {
> > > + kvm_set_invalid_pte(ptep);
> > > + kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu,
> > > + addr, level);
> > > put_page(page);
> > This doesn't make sense to me: the page-table pages we're walking when the
> > anchor is set are not accessible to the hardware walker because we unhooked
> > the entire sub-table in stage2_map_walk_table_pre(), which has the necessary
> > TLB invalidation.
> >
> > Are you seeing a problem in practice here?
>
> Yes, I did indeed find a problem in practice.
>
> When the migration was cancelled, a TLB conflict abort was observed in the guest.
>
> This problem was fixed before the rework of the page-table code; you can have a
> look at the following two links:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3c3736cd32bf5197aed1410ae826d2d254a5b277
>
> https://lists.cs.columbia.edu/pipermail/kvmarm/2019-March/035031.html
Ok, let's go through this, because I still don't see the bug. Please correct
me if you spot any mistakes:
1. We have a block mapping for X => Y
2. Dirty logging is enabled, so the block mapping is write-protected and
ends up being split into page mappings
3. Dirty logging is disabled due to a failed migration.
--- At this point, I think we agree that the state of the MMU is alright ---
4. We take a stage-2 fault and want to reinstall the block mapping:
a. kvm_pgtable_stage2_map() is invoked to install the block mapping
b. stage2_map_walk_table_pre() finds a table where we would like to
install the block:
i. The anchor is set to point at this entry
ii. The entry is made invalid
iii. We invalidate the TLB for the input address. This is
TLBI IPAS2E1IS without a level hint and then TLBI VMALLE1IS.
*** At this point, the page-table pointed to by the old table entry
is no longer reachable by the hardware walker ***
c. stage2_map_walk_leaf() is called for each leaf entry in the
now-unreachable subtree, dropping page-references for each valid
entry it finds.
d. stage2_map_walk_table_post() is eventually called for the entry
which we cleared back in b.ii, so we install the new block mapping.
You are proposing to add additional TLB invalidation to (c), but I don't
think that is necessary, thanks to the invalidation already performed in
b.iii. What am I missing here?
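To make (b) concrete, here is a rough sketch of the pre-walker logic as I
read it in pgtable.c (paraphrased from memory, not the verbatim kernel code):

static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
				     kvm_pte_t *ptep,
				     struct stage2_map_data *data)
{
	if (data->anchor)
		return 0;

	if (!kvm_block_mapping_supported(addr, end, data->phys, level))
		return 0;

	/* b.ii: make the existing table entry invalid ... */
	kvm_set_invalid_pte(ptep);

	/*
	 * b.iii: ... and invalidate the TLB for the input address. No level
	 * hint is passed, so this ends up as IPAS2E1IS followed by VMALLE1IS.
	 */
	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);

	/* b.i: remember this entry so the post-walker can install the block. */
	data->anchor = ptep;
	return 0;
}

Once this has run, nothing the leaf walker visits in (c) is reachable by the
hardware walker, which is why I don't think the extra invalidation in (c)
buys us anything.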
> > > + if (stage2_pte_cacheable(pte))
> > > + stage2_flush_dcache(kvm_pte_follow(pte),
> > > + kvm_granule_size(level));
> > I don't understand the need for the flush either, as we're just coalescing
> > existing entries into a larger block mapping.
>
> In my opinion, after unmapping it is necessary to ensure cache
> coherency, because theoretically it is unknown whether the memory attributes
> of the future mapping will change (cacheable -> non-cacheable).
But in this case we're just changing the structure of the page-tables,
not the pages which are mapped, are we?
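For reference, the two helpers your hunk forward-declares look roughly like
this in pgtable.c (again a sketch from my reading, not guaranteed verbatim):

static bool stage2_pte_cacheable(kvm_pte_t pte)
{
	u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;

	return memattr == PAGE_S2_MEMATTR(NORMAL);
}

static void stage2_flush_dcache(void *addr, u64 size)
{
	/* With FWB the D-cache is kept coherent by the hardware. */
	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
		return;

	__flush_dcache_area(addr, size);
}

Neither the output address nor the memory attributes change when we collapse
the table back into a block, so I don't see what the extra flush protects
against.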
Will