[RFC PATCH v1 00/18] Provide a new two step DMA API mapping API
Jason Gunthorpe
jgg at ziepe.ca
Tue Jul 9 12:03:20 PDT 2024
On Tue, Jul 09, 2024 at 08:20:15AM +0200, Christoph Hellwig wrote:
> On Mon, Jul 08, 2024 at 08:57:21PM -0300, Jason Gunthorpe wrote:
> > I understand the block stack already does this using P2P and !P2P, but
> > that isn't quite enough here as we want to split principally based on
> > IOMMU or !IOMMU.
>
> Except for the powerpc bypass IOMMU or not is a global decision,
> and the bypass is per I/O. So I'm not sure what else you want there?
For P2P we know if the DMA will go through the IOMMU or not based on
the PCIe fabric path between the initiator (the one doing the DMA) and
the target (the one providing the MMIO memory).
Depending on PCIe topology and ACS flags this path may use the IOMMU
or may skip the IOMMU.
To put it in code, the 'enum pci_p2pdma_map_type' can only be
determined once we know both the initiator and target struct device.
PCI_P2PDMA_MAP_BUS_ADDR means we don't use the IOMMU.
PCI_P2PDMA_MAP_THRU_HOST_BRIDGE means we do.
With this API it is important that a single request always has the
same PCI_P2PDMA_MAP_* outcome, and the simplest way to guarantee that
is to split a request wherever the MMIO memory switches to a different
target struct device.
Jason
More information about the Linux-nvme mailing list