[PATCH v1 05/16] iommufd/viommu: Add IOMMU_VIOMMU_SET/UNSET_VDEV_ID ioctl
Nicolin Chen
nicolinc at nvidia.com
Thu Aug 15 12:53:04 PDT 2024
On Thu, Aug 15, 2024 at 12:46:29PM -0700, Nicolin Chen wrote:
> > > +static struct device *
> > > +iommufd_viommu_find_device(struct iommufd_viommu *viommu, u64 id)
> > > +{
> > > +	struct iommufd_vdev_id *vdev_id;
> > > +
> > > +	xa_lock(&viommu->vdev_ids);
> > > +	vdev_id = xa_load(&viommu->vdev_ids, (unsigned long)id);
> > > +	xa_unlock(&viommu->vdev_ids);
> >
> > This lock doesn't do anything
> >
> > > +	if (!vdev_id || vdev_id->vdev_id != id)
> > > +		return NULL;
> >
> > And this is unlocked
> >
> > > +	return vdev_id->dev;
> > > +}
> >
> > This isn't good.. We can't return the struct device pointer here as
> > there is no locking for it anymore. We can't even know it is still
> > probed to VFIO anymore.
> >
> > It has to work by having the iommu driver directly access the xarray,
> > and then, entirely under the spinlock, the iommu driver can translate
> > the vSID to the pSID, let go, and push the invalidation to HW. No
> > races.
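
If I read that right, the driver-side flow would roughly look like the
below (sketch only; the my_viommu_* names and the dev-to-SID helper are
made up for illustration, not actual driver code):

static int my_viommu_cache_invalidate(struct iommufd_viommu *viommu, u64 vsid)
{
	struct iommufd_vdev_id *vdev_id;
	u32 psid;

	xa_lock(&viommu->vdev_ids);
	vdev_id = xa_load(&viommu->vdev_ids, (unsigned long)vsid);
	if (!vdev_id || vdev_id->vdev_id != vsid) {
		xa_unlock(&viommu->vdev_ids);
		return -ENOENT;
	}
	/* Translate vSID to pSID while the entry cannot be removed */
	psid = my_viommu_dev_to_sid(vdev_id->dev);
	xa_unlock(&viommu->vdev_ids);

	/* Lock is dropped; only the pSID is used from here on */
	my_viommu_push_inv_to_hw(psid);
	return 0;
}

So no struct device pointer would escape the xa_lock at all.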
>
> Maybe the iommufd_viommu_invalidate ioctl handler should hold that
> xa_lock around the viommu->ops->cache_invalidate, and then add a lock
> assert in iommufd_viommu_find_device?
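
I.e. roughly (untested; the handler shape here is simplified):

	/* In the invalidation handler, keep the xarray locked across the op */
	xa_lock(&viommu->vdev_ids);
	rc = viommu->ops->cache_invalidate(viommu, &data_array);
	xa_unlock(&viommu->vdev_ids);

and then iommufd_viommu_find_device() would only assert instead of
taking the lock itself:

	lockdep_assert_held(&viommu->vdev_ids.xa_lock);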
xa_lock/spinlock might be too heavy, though. We could wrap a mutex
around the viommu ioctl handlers instead.
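
Something along these lines maybe (just a sketch; the vdev_ids_lock
mutex and the handler signature are placeholders, not the actual code):

static int iommufd_viommu_invalidate_ioctl(struct iommufd_viommu *viommu,
					   struct iommu_user_data_array *array)
{
	int rc;

	/*
	 * The same mutex would be taken by SET/UNSET_VDEV_ID, so the
	 * vdev_id entries (and the device pointers behind them) stay
	 * stable while the driver walks the xarray, and the callback
	 * is free to sleep.
	 */
	mutex_lock(&viommu->vdev_ids_lock);
	rc = viommu->ops->cache_invalidate(viommu, array);
	mutex_unlock(&viommu->vdev_ids_lock);
	return rc;
}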