[PATCH v2 1/2] vfio/iommu_type1: Populate full dirty when detach non-pinned group

Alex Williamson alex.williamson at redhat.com
Thu Jan 21 13:05:48 EST 2021


On Mon, 18 Jan 2021 20:25:09 +0800
Keqian Zhu <zhukeqian1 at huawei.com> wrote:

> On 2021/1/16 2:01, Alex Williamson wrote:
> > On Fri, 15 Jan 2021 17:26:42 +0800
> > Keqian Zhu <zhukeqian1 at huawei.com> wrote:
> >   
> >> If a group with non-pinned-page dirty scope is detached with dirty
> >> logging enabled, we should fully populate the dirty bitmaps at the
> >> time it's removed since we don't know the extent of its previous DMA,
> >> nor will the group be present to trigger the full bitmap when the user
> >> retrieves the dirty bitmap.
> >>
> >> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
> >> Suggested-by: Alex Williamson <alex.williamson at redhat.com>
> >> Signed-off-by: Keqian Zhu <zhukeqian1 at huawei.com>
> >> ---
> >>  drivers/vfio/vfio_iommu_type1.c | 18 +++++++++++++++++-
> >>  1 file changed, 17 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index 0b4dedaa9128..4e82b9a3440f 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -236,6 +236,19 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> >>  	}
> >>  }
> >>  
> >> +static void vfio_iommu_populate_bitmap_full(struct vfio_iommu *iommu)
> >> +{
> >> +	struct rb_node *n;
> >> +	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> >> +
> >> +	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> >> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >> +
> >> +		if (dma->iommu_mapped)
> >> +			bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
> >> +	}
> >> +}
> >> +
> >>  static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
> >>  {
> >>  	struct rb_node *n;
> >> @@ -2415,8 +2428,11 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
> >>  	 * Removal of a group without dirty tracking may allow the iommu scope
> >>  	 * to be promoted.
> >>  	 */
> >> -	if (update_dirty_scope)
> >> +	if (update_dirty_scope) {
> >>  		update_pinned_page_dirty_scope(iommu);
> >> +		if (iommu->dirty_page_tracking)
> >> +			vfio_iommu_populate_bitmap_full(iommu);
> >> +	}
> >>  	mutex_unlock(&iommu->lock);
> >>  }
> >>    
> > 
> > This doesn't do the right thing.  This marks the bitmap dirty if:
> > 
> >  * The detached group dirty scope was not limited to pinned pages
> > 
> >  AND
> > 
> >  * Dirty tracking is enabled
> > 
> >  AND
> > 
> >  * The vfio_dma is *currently* (ie. after the detach) iommu_mapped
> > 
> > We need to mark the bitmap dirty based on whether the vfio_dma *was*
> > iommu_mapped by the group that is now detached.  Thanks,
> > 
> > Alex
> >   
> Hi Alex,
> 
> Yes, I missed this point again :-(. update_dirty_scope being set means we
> detached an iommu-backed group, which means the vfio_dma *was*
> iommu_mapped by that group, so we can populate the full bitmap
> unconditionally, right?

To do it unconditionally, the assumption would be that all current
vfio_dmas are iommu_mapped.  It does seem deterministic that a
non-pinned-page scope group implies all vfio_dmas are iommu_mapped; I
can't currently think of an exception.  Thanks,

Alex
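
A minimal sketch of the unconditional populate discussed above, assuming
the dma->iommu_mapped check is simply dropped from the v2 helper while the
call site keeps the update_dirty_scope && iommu->dirty_page_tracking guard
(a sketch of the direction agreed here, not necessarily the final code that
lands upstream):

static void vfio_iommu_populate_bitmap_full(struct vfio_iommu *iommu)
{
	struct rb_node *n;
	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);

	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);

		/*
		 * The detached group's dirty scope was not limited to
		 * pinned pages, so any vfio_dma may have been written
		 * through it.  Mark the whole range dirty rather than
		 * testing dma->iommu_mapped, which only reflects the
		 * mappings that remain after the detach.
		 */
		bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
	}
}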



