[RFC PATCH v2 3/8] KVM: arm64: Add some HW_DBM related pgtable interfaces

Shameerali Kolothum Thodi shameerali.kolothum.thodi at huawei.com
Tue Sep 26 08:52:19 PDT 2023



> -----Original Message-----
> From: Catalin Marinas [mailto:catalin.marinas at arm.com]
> Sent: 26 September 2023 16:20
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi at huawei.com>
> Cc: Oliver Upton <oliver.upton at linux.dev>; kvmarm at lists.linux.dev;
> kvm at vger.kernel.org; linux-arm-kernel at lists.infradead.org; maz at kernel.org;
> will at kernel.org; james.morse at arm.com; suzuki.poulose at arm.com;
> yuzenghui <yuzenghui at huawei.com>; zhukeqian
> <zhukeqian1 at huawei.com>; Jonathan Cameron
> <jonathan.cameron at huawei.com>; Linuxarm <linuxarm at huawei.com>
> Subject: Re: [RFC PATCH v2 3/8] KVM: arm64: Add some HW_DBM related
> pgtable interfaces
> 
> On Mon, Sep 25, 2023 at 08:04:39AM +0000, Shameerali Kolothum Thodi
> wrote:
> > From: Oliver Upton [mailto:oliver.upton at linux.dev]
> > > On Fri, Sep 22, 2023 at 04:24:11PM +0100, Catalin Marinas wrote:
> > > > I was wondering if this interferes with the OS dirty tracking (not the
> > > > KVM one) but I think that's ok, at least at this point, since the PTE is
> > > > already writeable and a fault would have marked the underlying page as
> > > > dirty (user_mem_abort() -> kvm_set_pfn_dirty()).
> > > >
> > > > I'm not particularly fond of relying on this but I need to see how it
> > > > fits with the rest of the series. IIRC KVM doesn't go around and make
> > > > Stage 2 PTEs read-only but rather unmaps them when it changes the
> > > > permission of the corresponding Stage 1 VMM mapping.
> > > >
> > > > My personal preference would be to track dirty/clean properly as we do
> > > > for stage 1 (e.g. DBM means writeable PTE) but it has some downsides
> > > > like the try_to_unmap() code having to retrieve the dirty state via
> > > > notifiers.
> > >
> > > KVM's usage of DBM is complicated by the fact that the dirty log
> > > interface w/ userspace is at PTE granularity. We only want the page
> > > table walker to relax PTEs, but take faults on hugepages so we can do
> > > page splitting.
> 
> Thanks for the clarification.
> 
> > > > > @@ -952,6 +990,11 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
> > > > >  	    stage2_pte_executable(new))
> > > > >  		mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule);
> > > > >
> > > > > +	/* Save the possible hardware dirty info */
> > > > > +	if ((ctx->level == KVM_PGTABLE_MAX_LEVELS - 1) &&
> > > > > +	    stage2_pte_writeable(ctx->old))
> > > > > +		mark_page_dirty(kvm_s2_mmu_to_kvm(pgt->mmu), ctx->addr >> PAGE_SHIFT);
> > > > > +
> > > > >  	stage2_make_pte(ctx, new);
> > > >
> > > > Isn't this racy and potentially losing the dirty state? Or is the 'new'
> > > > value guaranteed to have the S2AP[1] bit? For stage 1 we normally make
> > > > the page genuinely read-only (clearing DBM) in a cmpxchg loop to
> > > > preserve the dirty state (see ptep_set_wrprotect()).
> > >
> > > stage2_try_break_pte() a few lines up does a cmpxchg() and full
> > > break-before-make, so at this point there shouldn't be a race with
> > > either software or hardware table walkers.
> 
> Ah, I missed this. Also it was unrelated to this patch (or rather not
> introduced by this patch).
> 
> > > In both cases the 'old' translation should have DBM cleared. Even if the
> > > PTE were dirty, this is wasted work since we need to do a final scan of
> > > the stage-2 when userspace collects the dirty log.
> > >
> > > Am I missing something?
> >
> > I think we can get rid of the above mark_page_dirty(). I will test it to
> > confirm we are not missing anything here.
> 
> Is this the case for the other places of mark_page_dirty() in your
> patches? If stage2_pte_writeable() is true, it must have been made
> writeable earlier by a fault and the underlying page marked as dirty.
> 

One of the other places where we have mark_page_dirty() is in stage2_unmap_walker().
While testing this series, I tried removing that one and found that it
actually causes memory corruption during VM migration.
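
For reference, the check in stage2_unmap_walker() is along the same lines as
the map-walker hunk quoted above. A rough sketch of the idea (not the exact
hunk from the series; the names are taken from the quoted code):

	/*
	 * Save the possible hardware dirty info: if DBM let the guest write
	 * through this PTE (S2AP[1] set), record it in the dirty log before
	 * the entry is torn down.
	 */
	if (stage2_pte_writeable(ctx->old))
		mark_page_dirty(kvm_s2_mmu_to_kvm(pgt->mmu),
				ctx->addr >> PAGE_SHIFT);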

From my old debug logs:

[  399.288076]  stage2_unmap_walker+0x270/0x284
[  399.288078]  __kvm_pgtable_walk+0x1ec/0x2cc
[  399.288081]  __kvm_pgtable_walk+0xec/0x2cc
[  399.288084]  __kvm_pgtable_walk+0xec/0x2cc
[  399.288086]  kvm_pgtable_walk+0xcc/0x160
[  399.288088]  kvm_pgtable_stage2_unmap+0x4c/0xbc
[  399.288091]  stage2_apply_range+0xd0/0xec
[  399.288094]  __unmap_stage2_range+0x2c/0x60
[  399.288096]  kvm_unmap_gfn_range+0x30/0x48
[  399.288099]  kvm_mmu_notifier_invalidate_range_start+0xe0/0x264
[  399.288103]  __mmu_notifier_invalidate_range_start+0xa4/0x23c
[  399.288106]  change_protection+0x638/0x900
[  399.288109]  change_prot_numa+0x64/0xfc
[  399.288113]  task_numa_work+0x2ac/0x450
[  399.288117]  task_work_run+0x78/0xd0

It looks like the unmap path gets triggered from the NUMA page migration code
path, so we may need to keep the mark_page_dirty() there.
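
For comparison, the stage-1 behaviour Catalin mentions above keeps the dirty
state when write-protecting a PTE by clearing the write/DBM permission in a
cmpxchg loop. A simplified sketch of that approach (see arm64's
ptep_set_wrprotect(); illustrative rather than the exact kernel code):

	pte = READ_ONCE(*ptep);
	do {
		old_pte = pte;
		/* pte_wrprotect() folds a hardware dirty bit into the SW dirty bit */
		pte = pte_wrprotect(pte);
		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
					       pte_val(old_pte), pte_val(pte));
	} while (pte_val(pte) != pte_val(old_pte));

On the stage-2 side, the break-before-make done by stage2_try_break_pte()
(as Oliver notes above) is what prevents the equivalent race.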

Thanks,
Shameer


