[PATCH 4/7] vfio: iommu_type1: Fix missing dirty page when promote pinned_scope
Alex Williamson
alex.williamson at redhat.com
Tue Dec 15 10:53:59 EST 2020
On Tue, 15 Dec 2020 17:37:11 +0800
zhukeqian <zhukeqian1 at huawei.com> wrote:
> Hi Alex,
>
> On 2020/12/15 8:04, Alex Williamson wrote:
> > On Thu, 10 Dec 2020 15:34:22 +0800
> > Keqian Zhu <zhukeqian1 at huawei.com> wrote:
> >
> >> When we pin or detach a group that is not dirty-tracking capable,
> >> we try to promote the pinned_scope of the vfio_iommu.
> >>
> >> If we succeed in doing so, vfio will only report the pinned scope as
> >> dirty to userspace from then on, but memory written before the pin or
> >> detach is missed.
> >>
> >> The solution is to populate all dma ranges as dirty before promoting
> >> the pinned_scope of the vfio_iommu.
> >
> > Please don't bury fix patches in a series with other optimizations
> > and semantic changes. Send them separately.
> >
> OK, I will.
>
> >>
> >> Signed-off-by: Keqian Zhu <zhukeqian1 at huawei.com>
> >> ---
> >> drivers/vfio/vfio_iommu_type1.c | 18 ++++++++++++++++++
> >> 1 file changed, 18 insertions(+)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index bd9a94590ebc..00684597b098 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -1633,6 +1633,20 @@ static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> >> return group;
> >> }
> >>
> >> +static void vfio_populate_bitmap_all(struct vfio_iommu *iommu)
> >> +{
> >> + struct rb_node *n;
> >> + unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> >> +
> >> + for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> >> + struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >> + unsigned long nbits = dma->size >> pgshift;
> >> +
> >> + if (dma->iommu_mapped)
> >> + bitmap_set(dma->bitmap, 0, nbits);
> >> + }
> >> +}
> >
> >
> > If we detach a group which results in only non-IOMMU backed mdevs,
> > don't we also clear dma->iommu_mapped as part of vfio_unmap_unpin()
> > such that this test is invalid? Thanks,
>
> Good spot :-). The code will skip the bitmap_set in this situation.
>
> We should set the bitmap unconditionally when the vfio_iommu is promoted,
> as we must have had an IOMMU-backed domain before promoting the vfio_iommu.
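A minimal sketch of that unconditional variant, adapted from the
vfio_populate_bitmap_all() hunk quoted further down (illustration only,
not the author's follow-up patch):

static void vfio_populate_bitmap_all(struct vfio_iommu *iommu)
{
	struct rb_node *n;
	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);

	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
		unsigned long nbits = dma->size >> pgshift;

		/*
		 * Mark the whole range dirty unconditionally; the mapping may
		 * already have been torn down by vfio_unmap_unpin() on detach,
		 * so dma->iommu_mapped is not a reliable test here.
		 */
		bitmap_set(dma->bitmap, 0, nbits);
	}
}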
>
> Besides, I think we should also mark pages dirty in vfio_remove_dma if
> dirty tracking is active. Right?
There's no remaining bitmap to mark dirty if the vfio_dma is removed.
In this case it's the user's responsibility to collect remaining dirty
pages using the VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP support in the
VFIO_IOMMU_UNMAP_DMA ioctl. Thanks,
Alex
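As a rough illustration of that flow from the userspace side (an
assumption-laden sketch, not tested code: it presumes a VFIO container fd
with dirty tracking already started and a kernel whose linux/vfio.h has the
v5.7+ struct vfio_bitmap and the data[] member on the unmap struct):

#include <linux/vfio.h>
#include <errno.h>
#include <stdlib.h>
#include <sys/ioctl.h>

/*
 * Unmap [iova, iova + size) and collect the final dirty bitmap for that
 * range in the same VFIO_IOMMU_UNMAP_DMA call, one bit per pgsize page.
 */
static int unmap_and_get_dirty(int container, __u64 iova, __u64 size,
			       __u64 pgsize, __u64 *user_bitmap)
{
	struct vfio_iommu_type1_dma_unmap *unmap;
	struct vfio_bitmap *bitmap;
	__u64 npages = size / pgsize;
	int ret;

	/* struct vfio_bitmap is carried in the flexible data[] member */
	unmap = calloc(1, sizeof(*unmap) + sizeof(*bitmap));
	if (!unmap)
		return -ENOMEM;

	unmap->argsz = sizeof(*unmap) + sizeof(*bitmap);
	unmap->flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
	unmap->iova = iova;
	unmap->size = size;

	bitmap = (struct vfio_bitmap *)unmap->data;
	bitmap->pgsize = pgsize;
	/* one bit per page, rounded up to a multiple of 64 bits, in bytes */
	bitmap->size = ((npages + 63) / 64) * 8;
	bitmap->data = user_bitmap;

	ret = ioctl(container, VFIO_IOMMU_UNMAP_DMA, unmap);
	free(unmap);
	return ret;
}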
> >> +
> >> static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
> >> {
> >> struct vfio_domain *domain;
> >> @@ -1657,6 +1671,10 @@ static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
> >> }
> >>
> >> iommu->pinned_page_dirty_scope = true;
> >> +
> >> + /* Set all bitmap to avoid missing dirty page */
> >> + if (iommu->dirty_page_tracking)
> >> + vfio_populate_bitmap_all(iommu);
> >> }
> >>
> >> static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
> >