[PATCH v1 00/17] Provide a new two step DMA mapping API

Leon Romanovsky leon at kernel.org
Mon Nov 4 03:39:20 PST 2024


On Mon, Nov 04, 2024 at 10:58:31AM +0100, Christoph Hellwig wrote:
> On Thu, Oct 31, 2024 at 09:17:45PM +0000, Robin Murphy wrote:

<...>

> >>   2. VFIO PCI live migration code is building a very large "page list"
> >>      for the device. Instead of allocating a scatter list entry per allocated
> >>      page it can just allocate an array of 'struct page *', saving a large
> >>      amount of memory.
> >
> > VFIO already assumes a coherent device with (realistically) an IOMMU which 
> > it explicitly manages - why is it even pretending to need a generic DMA 
> > API?
> 
> AFAIK that isn't really vfio as we know it but the control device
> for live migration.  But Leon or Jason might fill in more.

Yes, you are right, as written above: "VFIO PCI live migration ...".
That piece of code is directly connected to the underlying real HW
device and uses the DMA API to provide live migration functionality
to/from that device.
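
To make that concrete, here is a minimal sketch of what that path can
look like with the two-step API proposed in this series: keep a flat
'struct page *' array and link it into one contiguous IOVA range, with
no scatterlist anywhere. The dma_iova_* calls are the ones this series
introduces; error paths are trimmed and the details are illustrative,
not the exact mlx5 code:

	struct dma_iova_state state = {};
	size_t mapped = 0, i;
	int ret;

	/* One IOVA allocation covers the whole migration buffer. */
	if (!dma_iova_try_alloc(dev, &state, 0, npages * PAGE_SIZE))
		goto fallback;	/* no IOMMU: dma_map_page() per page */

	/* Link each page; no per-page metadata beyond 'pages[]'. */
	for (i = 0; i < npages; i++) {
		ret = dma_iova_link(dev, &state, page_to_phys(pages[i]),
				    i * PAGE_SIZE, PAGE_SIZE,
				    DMA_BIDIRECTIONAL, 0);
		if (ret)
			goto err_destroy;
		mapped += PAGE_SIZE;
	}
	ret = dma_iova_sync(dev, &state, 0, mapped);

When dma_iova_try_alloc() fails (e.g. no IOMMU present), the caller
falls back to per-page dma_map_page(), so coherent direct-mapped
setups keep working as before.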

> 
> The point is that quite a few devices have these page list based APIs
> (RDMA where mlx5 comes from, NVMe with PRPs, AHCI, GPUs).
> 
> >
> >>   3. NVMe PCI demonstrates how a BIO can be converted to a HW scatter
> >>      list without having to allocate then populate an intermediate SG table.
> >
> > As above, given that a bio_vec still deals in struct pages, that could 
> > seemingly already be done by just mapping the pages, so how is it proving 
> > any benefit of a fragile new interface?
> 
> Because we only need to preallocate the tiny, constant-sized dma_iova_state
> as part of the request instead of an additional scatterlist that requires
> sizeof(struct page *) + sizeof(dma_addr_t) + 3 * sizeof(unsigned int)
> per segment, including a memory allocation per I/O for that.
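
Put concretely, assuming a 64-bit config where both pointers and
dma_addr_t are 8 bytes: 8 + 8 + 3 * 4 = 28 bytes (padded to 32) per
segment, so a 1 MiB request split into 256 4 KiB segments drags an
~8 KiB scatterlist allocation through every I/O, while the
dma_iova_state embedded in the request stays tiny and constant no
matter how large the I/O is.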
> 
> > My big concern here is that a thin and vaguely-defined wrapper around the 
> > IOMMU API is itself a step which smells strongly of "abuse and design 
> > mistake", given that the basic notion of allocating DMA addresses in 
> > advance clearly cannot generalise. Thus it really demands some considered 
> > justification beyond "We must do something; This is something; Therefore we 
> > must do this." to be convincing.
> 
> At least for the block code we have a nice little core wrapper that is
> very easy to use, and provides a great reduction of memory use and
> allocations.  The HMM use case I'll let others talk about.

I'm not sure which wrappers Robin is talking about, but if we are
talking about the HMM wrappers, they give us a perfect combination of
usability, performance, and maintainability. All HMM users follow the
same pattern, use the same structures, and don't need to worry about
internal DMA/IOMMU details; a sketch of that pattern is below.
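
As a sketch of that shared pattern (using hmm_range_fault() plus the
linking step from this series; fields and calls are abbreviated and
illustrative, not exact driver code):

	unsigned long pfns[NPAGES];
	struct hmm_range range = {
		.notifier	= &notifier,
		.start		= addr,
		.end		= addr + NPAGES * PAGE_SIZE,
		.hmm_pfns	= pfns,
		.default_flags	= HMM_PFN_REQ_FAULT,
	};
	size_t i;
	int ret;

	ret = hmm_range_fault(&range);	/* fault in and collect PFNs */
	...
	/* Every user then walks the PFN array the same way and links
	 * it into a single IOVA range, no scatterlist involved. */
	for (i = 0; i < NPAGES; i++) {
		struct page *page = hmm_pfn_to_page(pfns[i]);

		ret = dma_iova_link(dev, &state, page_to_phys(page),
				    i * PAGE_SIZE, PAGE_SIZE, dir, 0);
		...
	}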

Thanks


