[PATCH v3 07/17] dma-mapping: Implement link/unlink ranges API

Leon Romanovsky <leon@kernel.org>
Mon Nov 18 10:55:33 PST 2024


On Mon, Nov 18, 2024 at 02:59:30PM +0000, Will Deacon wrote:
> On Sun, Nov 10, 2024 at 03:46:54PM +0200, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
> > 
> > Introduce new DMA APIs to perform DMA linkage of buffers
> > in layers above the DMA API.
> > 
> > In the proposed API, callers will perform the following steps.
> > In map path:
> > 	if (dma_can_use_iova(...))
> > 	    dma_iova_alloc()
> > 	    for (page in range)
> > 	       dma_iova_link_next(...)
> > 	    dma_iova_sync(...)
> > 	else
> > 	     /* Fallback to legacy map pages */
> > 	     for (all pages)
> > 	       dma_map_page(...)
> > 
> > In unmap path:
> > 	if (dma_can_use_iova(...))
> > 	     dma_iova_destroy()
> > 	else
> > 	     for (all pages)
> > 		dma_unmap_page(...)
> > 
> > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > ---
> >  drivers/iommu/dma-iommu.c   | 259 ++++++++++++++++++++++++++++++++++++
> >  include/linux/dma-mapping.h |  32 +++++
> >  2 files changed, 291 insertions(+)
> 
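
To make the flow above concrete, here is a rough, untested sketch of a
caller covering both paths. map_pages()/unmap_pages() and the fallback[]
array are made up for illustration, and the argument lists are guesses
from the pseudocode rather than the exact prototypes in this series:

	static int map_pages(struct device *dev, struct dma_iova_state *state,
			     struct page **pages, int nr, size_t size,
			     enum dma_data_direction dir, unsigned long attrs,
			     dma_addr_t *fallback)
	{
		int i, ret;

		if (dma_can_use_iova(state)) {
			/* One contiguous IOVA range for the whole buffer */
			ret = dma_iova_alloc(dev, state, size);
			if (ret)
				return ret;

			for (i = 0; i < nr; i++) {
				/* Link each page at the next spot in the range */
				ret = dma_iova_link_next(dev, state, pages[i],
							 dir, attrs);
				if (ret)
					goto err_destroy;
			}
			/* Flush the IOTLB once for the whole range */
			return dma_iova_sync(dev, state, 0, size);
		}

		/* Legacy fallback; dma_mapping_error() checks omitted */
		for (i = 0; i < nr; i++)
			fallback[i] = dma_map_page(dev, pages[i], 0,
						   PAGE_SIZE, dir);
		return 0;

	err_destroy:
		dma_iova_destroy(dev, state, dir, attrs);
		return ret;
	}

	static void unmap_pages(struct device *dev,
				struct dma_iova_state *state, int nr,
				enum dma_data_direction dir,
				unsigned long attrs, dma_addr_t *fallback)
	{
		int i;

		if (dma_can_use_iova(state)) {
			/* Unlinks everything and frees the IOVA range */
			dma_iova_destroy(dev, state, dir, attrs);
			return;
		}

		for (i = 0; i < nr; i++)
			dma_unmap_page(dev, fallback[i], PAGE_SIZE, dir);
	}

The point is that the IOVA path does a single allocation and a single
sync for the whole buffer, while the fallback stays per-page.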

<...>

> > +static void __iommu_dma_iova_unlink(struct device *dev,
> > +		struct dma_iova_state *state, size_t offset, size_t size,
> > +		enum dma_data_direction dir, unsigned long attrs,
> > +		bool free_iova)
> > +{
> > +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> > +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> > +	struct iova_domain *iovad = &cookie->iovad;
> > +	dma_addr_t addr = state->addr + offset;
> > +	size_t iova_start_pad = iova_offset(iovad, addr);
> > +	struct iommu_iotlb_gather iotlb_gather;
> > +	size_t unmapped;
> > +
> > +	if ((state->__size & DMA_IOVA_USE_SWIOTLB) ||
> > +	    (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)))
> > +		iommu_dma_iova_unlink_range_slow(dev, addr, size, dir, attrs);
> > +
> > +	iommu_iotlb_gather_init(&iotlb_gather);
> > +	iotlb_gather.queued = free_iova && READ_ONCE(cookie->fq_domain);
> > +
> > +	size = iova_align(iovad, size + iova_start_pad);
> > +	addr -= iova_start_pad;
> > +	unmapped = iommu_unmap_fast(domain, addr, size, &iotlb_gather);
> > +	WARN_ON(unmapped != size);
> 
> Does the new API require that the 'size' passed to dma_iova_unlink()
> exactly match the 'size' passed to the corresponding call to
> dma_iova_link()? I ask because the IOMMU page-table code is built around
> the assumption that partial unmap() operations never occur (i.e.
> operations which could require splitting a huge mapping). We just
> removed [1] that code from the Arm IO page-table implementations, so it
> would be good to avoid adding it back for this.

dma_iova_link()/dma_iova_unlink() don't add any assumptions beyond the
ones that already exist for dma_map_sg()/dma_unmap_sg(). In practice,
this means that every unlink call covers the same offset and size as
the corresponding link call, so the IOMMU never has to split a huge
mapping for a partial unmap.
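
Roughly (prototypes approximate, and SZ_2M is just an example size):

	/* Map path: link one chunk at 'offset' inside the IOVA range */
	ret = dma_iova_link(dev, state, phys, offset, SZ_2M, dir, attrs);

	/* Unmap path: unlink exactly the same offset/size pair */
	dma_iova_unlink(dev, state, offset, SZ_2M, dir, attrs);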

Thanks

> 
> Will
> 
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux.git/commit/?h=arm/smmu&id=33729a5fc0caf7a97d20507acbeee6b012e7e519