[PATCH v3 4/4] KVM: arm64: Clear active_vmids on vCPU schedule out

Shameerali Kolothum Thodi shameerali.kolothum.thodi at huawei.com
Sun Oct 10 23:06:30 PDT 2021



> -----Original Message-----
> From: Shameerali Kolothum Thodi
> Sent: 11 August 2021 09:48
> To: 'Will Deacon' <will at kernel.org>
> Cc: linux-arm-kernel at lists.infradead.org; kvmarm at lists.cs.columbia.edu;
> linux-kernel at vger.kernel.org; maz at kernel.org; catalin.marinas at arm.com;
> james.morse at arm.com; julien.thierry.kdev at gmail.com;
> suzuki.poulose at arm.com; jean-philippe at linaro.org;
> Alexandru.Elisei at arm.com; qperret at google.com; Linuxarm
> <linuxarm at huawei.com>
> Subject: RE: [PATCH v3 4/4] KVM: arm64: Clear active_vmids on vCPU
> schedule out
> 
> Hi Will,
> 
> > -----Original Message-----
> > From: Will Deacon [mailto:will at kernel.org]
> > Sent: 03 August 2021 16:31
> > To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi at huawei.com>
> > Cc: linux-arm-kernel at lists.infradead.org; kvmarm at lists.cs.columbia.edu;
> > linux-kernel at vger.kernel.org; maz at kernel.org; catalin.marinas at arm.com;
> > james.morse at arm.com; julien.thierry.kdev at gmail.com;
> > suzuki.poulose at arm.com; jean-philippe at linaro.org;
> > Alexandru.Elisei at arm.com; qperret at google.com; Linuxarm
> > <linuxarm at huawei.com>
> > Subject: Re: [PATCH v3 4/4] KVM: arm64: Clear active_vmids on vCPU
> > schedule out
> 
> [...]
> 
> > I think we have to be really careful not to run into the "suspended
> > animation" problem described in ae120d9edfe9 ("ARM: 7767/1: let the ASID
> > allocator handle suspended animation") if we go down this road.
> >
> > Maybe something along the lines of:
> >
> > ROLLOVER
> >
> >   * Take lock
> >   * Inc generation
> >     => This will force everybody down the slow path
> >   * Record active VMIDs
> >   * Broadcast TLBI
> >     => Only active VMIDs can be dirty
> >     => Reserve active VMIDs and mark as allocated
> >
> > VCPU SCHED IN
> >
> >   * Set active VMID
> >   * Check generation
> >   * If mismatch then:
> >         * Take lock
> >         * Try to match a reserved VMID
> >         * If no reserved VMID, allocate new
> >
> > VCPU SCHED OUT
> >
> >   * Clear active VMID
> >
> > but I'm not daft enough to think I got it right first time. I think it
> > needs both implementing *and* modelling in TLA+ before we merge it!
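Just for illustration, a rough C sketch of the ROLLOVER step outlined
above. cpu_vmid_lock, active_vmids, vmid_map and NUM_USER_VMIDS appear
in the diff further down; vmid_generation, VMID_FIRST_VERSION and
vmid2idx() are assumed names here, following the ASID allocator in
arch/arm64/mm/context.c:

static void vmid_rollover(void)	/* caller holds cpu_vmid_lock */
{
	int cpu;
	u64 vmid;

	/* Inc generation: forces everybody down the slow path. */
	atomic64_add(VMID_FIRST_VERSION, &vmid_generation);
	bitmap_clear(vmid_map, 0, NUM_USER_VMIDS);

	/* Record the active VMIDs and mark them as allocated. */
	for_each_possible_cpu(cpu) {
		vmid = atomic64_read(&per_cpu(active_vmids, cpu));
		if (vmid)
			__set_bit(vmid2idx(vmid), vmid_map);
	}

	/* Broadcast TLBI: only the (now reserved) active VMIDs can be dirty. */
	kvm_call_hyp(__kvm_flush_vm_context);
}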
> 
> I attempted to implement the above algo as below. It seems to be
> working in both the 16-bit and 4-bit vmid test setups.

It is not :(. I did an extended overnight test run and it fails.
It looks to me like, in my implementation below, there is no
synchronization between setting the active VMID and a concurrent
rollover. I will have another go.
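
Roughly, the interleaving I suspect is (illustrative only, not
runnable code):

  CPU0 (kvm_arm_vmid_update)           CPU1 (rollover in new_vmid())

  vmid = atomic64_read(&kvm_vmid->id);
  vmid_gen_match(vmid)   /* true */
                                       /* takes cpu_vmid_lock, bumps
                                        * the generation and calls
                                        * flush_context(); CPU0's
                                        * active_vmids does not hold
                                        * vmid yet, so vmid is neither
                                        * preserved nor marked as
                                        * allocated in vmid_map */
  atomic64_set(this_cpu_ptr(&active_vmids), vmid);

CPU0 then keeps running with an old-generation VMID whose bit is clear
in vmid_map, so a later new_vmid() can hand the same VMID to a
different VM. The xchg()/cmpxchg() pair that the diff below removes is
what closed this window: flush_context() zeroing active_vmids makes
the fast-path cmpxchg() fail and forces the slow path under the lock.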

Thanks,
Shameer

> Though I am
> not quite sure this is exactly what you had in mind above and covers
> all corner cases.
> 
> Please take a look and let me know.
> (The diff below is against this v3 series)
> 
> Thanks,
> Shameer
> 
> --->8<----
> 
> --- a/arch/arm64/kvm/vmid.c
> +++ b/arch/arm64/kvm/vmid.c
> @@ -43,7 +43,7 @@ static void flush_context(void)
>         bitmap_clear(vmid_map, 0, NUM_USER_VMIDS);
> 
>         for_each_possible_cpu(cpu) {
> -               vmid = atomic64_xchg_relaxed(&per_cpu(active_vmids, cpu), 0);
> +               vmid = atomic64_read(&per_cpu(active_vmids, cpu));
> 
>                 /* Preserve reserved VMID */
>                 if (vmid == 0)
> @@ -125,32 +125,17 @@ void kvm_arm_vmid_clear_active(void)
>  void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
>  {
>         unsigned long flags;
> -       u64 vmid, old_active_vmid;
> +       u64 vmid;
> 
>         vmid = atomic64_read(&kvm_vmid->id);
> -
> -       /*
> -        * Please refer comments in check_and_switch_context() in
> -        * arch/arm64/mm/context.c.
> -        */
> -       old_active_vmid = atomic64_read(this_cpu_ptr(&active_vmids));
> -       if (old_active_vmid && vmid_gen_match(vmid) &&
> -           atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
> -                                    old_active_vmid, vmid))
> +       if (vmid_gen_match(vmid)) {
> +               atomic64_set(this_cpu_ptr(&active_vmids), vmid);
>                 return;
> -
> -       raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
> -
> -       /* Check that our VMID belongs to the current generation. */
> -       vmid = atomic64_read(&kvm_vmid->id);
> -       if (!vmid_gen_match(vmid)) {
> -               vmid = new_vmid(kvm_vmid);
> -               atomic64_set(&kvm_vmid->id, vmid);
>         }
> 
> -
> +       raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
> +       vmid = new_vmid(kvm_vmid);
> +       atomic64_set(&kvm_vmid->id, vmid);
>         atomic64_set(this_cpu_ptr(&active_vmids), vmid);
>         raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
>  }
> --->8<----