[RFC PATCH v2 2/7] KVM: arm64: Introduce support to pin VMIDs
Shameerali Kolothum Thodi
shameerali.kolothum.thodi at huawei.com
Mon Jun 24 12:34:42 PDT 2024
> -----Original Message-----
> From: Oliver Upton <oliver.upton at linux.dev>
> Sent: Monday, June 24, 2024 8:22 PM
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi at huawei.com>
> Cc: kvmarm at lists.linux.dev; iommu at lists.linux.dev;
> linux-arm-kernel at lists.infradead.org; Linuxarm <linuxarm at huawei.com>;
> kevin.tian at intel.com; jgg at ziepe.ca; alex.williamson at redhat.com;
> maz at kernel.org; will at kernel.org; robin.murphy at arm.com;
> jean-philippe at linaro.org; Jonathan Cameron <jonathan.cameron at huawei.com>
> Subject: Re: [RFC PATCH v2 2/7] KVM: arm64: Introduce support to pin
> VMIDs
>
> Hi Shameer,
>
> On Thu, Feb 08, 2024 at 03:18:32PM +0000, Shameer Kolothum wrote:
>
> [...]
>
> > +unsigned long kvm_arm_pinned_vmid_get(struct kvm_vmid *kvm_vmid)
> > +{
> > +	unsigned long flags;
> > +	u64 vmid;
> > +
> > +	if (!pinned_vmid_map)
> > +		return 0;
>
> Like I'd mentioned over in the other thread, returning 0 for error
> conditions is rather confusing unless one is staring at the VMID
> allocator code.
Agree. Returning 0 gives the impression that VMID 0 is a valid allocation.
>
> Can you rework this to return an actual error (e.g. -EINVAL) on failure?
Ok.
>
> > +	raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
> > +
> > +	vmid = atomic64_read(&kvm_vmid->id);
> > +
> > +	if (refcount_inc_not_zero(&kvm_vmid->pinned))
> > +		goto out_unlock;
> > +
> > +	if (nr_pinned_vmids >= max_pinned_vmids) {
> > +		vmid = 0;
> > +		goto out_unlock;
> > +	}
>
> You need to decrement the refcount in this error path.
Hmm.. do we? If we are here, refcount_inc_not_zero() has returned false,
which means the refcount was zero and no reference was taken, right?
>
> > +
> > +	/*
> > +	 * If we went through one or more rollover since that VMID was
> > +	 * used, make sure it is still valid, or generate a new one.
> > +	 */
> > +	if (!vmid_gen_match(vmid))
> > +		vmid = new_vmid(kvm_vmid);
> > +
> > +	nr_pinned_vmids++;
> > +	__set_bit(vmid2idx(vmid), pinned_vmid_map);
> > +	refcount_set(&kvm_vmid->pinned, 1);
> > +
> > +out_unlock:
> > +	raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
> > +
> > +	vmid &= ~VMID_MASK;
> > +
> > +	return vmid;
> > +}
> > +
> > +void kvm_arm_pinned_vmid_put(struct kvm_vmid *kvm_vmid)
> > +{
> > +	unsigned long flags;
> > +	u64 vmid = atomic64_read(&kvm_vmid->id);
> > +
> > +	if (!pinned_vmid_map)
> > +		return;
> > +
> > +	raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
> > +
> > +	if (refcount_dec_and_test(&kvm_vmid->pinned)) {
> > +		__clear_bit(vmid2idx(vmid), pinned_vmid_map);
> > +		nr_pinned_vmids--;
> > +	}
> > +
> > +	raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
> > +}
> > +
> >  /*
> >   * Initialize the VMID allocator
> >   */
> > @@ -191,10 +263,20 @@ int __init kvm_arm_vmid_alloc_init(void)
> >  	if (!vmid_map)
> >  		return -ENOMEM;
> >
> > +	pinned_vmid_map = bitmap_zalloc(NUM_USER_VMIDS, GFP_KERNEL);
> > +	nr_pinned_vmids = 0;
> > +
> > +	/*
> > +	 * Ensure we have at least one emty slot available after rollover
>
> typo: empty
Thanks,
Shameer