[RFC PATCH v2 0/6] Add IOPF support for VFIO passthrough

Shenming Lu <lushenming@huawei.com>
Tue Mar 9 06:22:01 GMT 2021


Hi,

The static pinning and mapping problem in VFIO and possible solutions
have been discussed a lot [1, 2]. One of the solutions is to add I/O
page fault (IOPF) support for VFIO devices. Unlike the relatively
complicated software approaches, such as presenting a vIOMMU that
provides the DMA buffer information (possibly with para-virtualized
optimizations), IOPF mainly relies on hardware faulting capabilities
such as the PCIe PRI extension or the Arm SMMU stall model. Moreover,
IOPF support in the IOMMU driver is already being implemented for
SVA [3], so this series adds IOPF support for VFIO passthrough based
on the IOPF part of SVA.
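For a rough picture of the faulting path, here is a minimal sketch of a
per-device fault handler built on the existing IOMMU fault reporting API
(iommu_register_device_fault_handler() / iommu_page_response()). It is
only an illustration, not the code in this series: the actual handler is
vfio_iommu_dev_fault_handler() in patch 3, and vfio_pin_and_map_range()
below is a made-up placeholder for the real pin-and-map work.

#include <linux/iommu.h>

/*
 * Placeholder (hypothetical): pin the user pages backing
 * [iova, iova + size) and map them in the IOMMU. Patch 3 does the
 * real work.
 */
static int vfio_pin_and_map_range(struct device *dev, u64 iova, size_t size)
{
        return 0;
}

/* Sketch of a handler for recoverable I/O page faults (page requests). */
static int vfio_dev_fault_handler_sketch(struct iommu_fault *fault, void *data)
{
        struct device *dev = data;
        struct iommu_page_response resp = {
                .version = IOMMU_PAGE_RESP_VERSION_1,
        };
        int ret;

        if (fault->type != IOMMU_FAULT_PAGE_REQ)
                return -EOPNOTSUPP;

        /* Pin and map the faulting page (placeholder). */
        ret = vfio_pin_and_map_range(dev, fault->prm.addr, PAGE_SIZE);

        /* Complete the page request so the device can retry the access. */
        resp.grpid = fault->prm.grpid;
        resp.code = ret ? IOMMU_PAGE_RESP_INVALID : IOMMU_PAGE_RESP_SUCCESS;
        if (fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) {
                resp.flags = IOMMU_PAGE_RESP_PASID_VALID;
                resp.pasid = fault->prm.pasid;
        }

        return iommu_page_response(dev, &resp);
}

Such a handler would be registered at attach time with
iommu_register_device_fault_handler() and unregistered on detach.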

We measured its performance with UADK [4] (an accelerator passed
through to a VM) on a HiSilicon Kunpeng920 board:

Run hisi_sec_test...
 - with varying message lengths and send counts ("times" below)
 - with/without stage 2 IOPF enabled
 - "relative perf" in the tables is the with-IOPF speed as a percentage
   of the w/o-IOPF speed

when msg_len = 1MB and PREMAP_LEN (in patch 3) = 1:
           speed (KB/s)
 times     w/o IOPF        with IOPF (num of faults)        relative perf
 1         325596          119152 (518)                     36.6%
 100       7524985         5804659 (1058)                   77.1%
 1000      8661817         8440209 (1071)                   97.4%
 5000      8804512         8724368 (1216)                   99.1%

Since the messages are sent from the same region and page faults occur
almost only on first access, the more times we send, the closer the
with-IOPF performance gets to the baseline.

when msg_len = 10MB and PREMAP_LEN = 512:
           speed (KB/s)
 times     w/o IOPF        with IOPF (num of faults)        relative perf
 1         1012758         682257 (13)                      67.4%
 100       8680688         8374154 (26)                     96.5%
 1000      8860861         8719918 (26)                     98.4%

We see that pre-mapping can help.
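The pre-mapping idea itself is simple: on a fault, besides the faulting
page, speculatively pin and map up to PREMAP_LEN pages starting at the
fault address, clamped to the containing DMA mapping, so that a
sequential walk over the same buffer does not fault page by page. A
rough sketch of the window computation only (hypothetical helper in the
context of vfio_iommu_type1.c, not the code in patch 3):

/* PREMAP_LEN is the tunable from patch 3 (512 in the test above). */
static int vfio_premap_on_fault_sketch(struct vfio_dma *dma,
                                       dma_addr_t fault_iova)
{
        dma_addr_t start = fault_iova & PAGE_MASK;
        dma_addr_t end = min_t(dma_addr_t,
                               start + (dma_addr_t)PREMAP_LEN * PAGE_SIZE,
                               dma->iova + dma->size);

        /* Pin the user pages and map [start, end) in the IOMMU here. */
        return 0;
}

A larger PREMAP_LEN trades more speculative pinning for fewer faults,
which matches the fault counts in the 10MB / PREMAP_LEN = 512 table
above.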

We also measured the performance of host SVA with the same parameters:

when msg_len = 1MB:
           speed (KB/s)
 times     w/o IOPF        with IOPF (num of faults)        relative perf
 1         951672          163866 (512)                     17.2%
 100       8691961         4529971 (1024)                   52.1%
 1000      9158721         8376346 (1024)                   91.5%
 5000      9184532         9008739 (1024)                   98.1%

Besides, the average time spent in vfio_iommu_dev_fault_handler() (in
patch 3) is a little less than that in iopf_handle_group() (in SVA):
1.6 us vs 2.0 us.

History:

v1 -> v2
 - Numerous improvements following the suggestions. Thanks a lot to all
   of you.

Yet TODO:
 - Add support for PRI.
 - Consider selective-faulting. (suggested by Kevin)
 ...

Links:
[1] Lesokhin I, et al. Page Fault Support for Network Controllers. In ASPLOS,
    2016.
[2] Tian K, et al. coIOMMU: A Virtual IOMMU with Cooperative DMA Buffer Tracking
    for Efficient Memory Management in Direct I/O. In USENIX ATC, 2020.
[3] https://patchwork.kernel.org/project/linux-arm-kernel/cover/20210302092644.2553014-1-jean-philippe@linaro.org/
[4] https://github.com/Linaro/uadk

Any comments and suggestions are very welcome. :-)

Thanks,
Shenming


Shenming Lu (6):
  iommu: Evolve to support more scenarios of using IOPF
  vfio: Add an MMU notifier to avoid pinning
  vfio: Add a page fault handler
  vfio: VFIO_IOMMU_ENABLE_IOPF
  vfio: No need to statically pin and map if IOPF enabled
  vfio: Add nested IOPF support

 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   |   3 +-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   |  11 +-
 drivers/iommu/io-pgfault.c                    |   4 -
 drivers/iommu/iommu.c                         |  56 ++-
 drivers/vfio/vfio.c                           | 118 +++++
 drivers/vfio/vfio_iommu_type1.c               | 446 +++++++++++++++++-
 include/linux/iommu.h                         |  21 +-
 include/linux/vfio.h                          |  14 +
 include/uapi/linux/iommu.h                    |   3 +
 include/uapi/linux/vfio.h                     |   6 +
 10 files changed, 661 insertions(+), 21 deletions(-)

-- 
2.19.1



