[PATCH v3 03/13] iommu/dma: Force bouncing if the size is not cacheline-aligned

Catalin Marinas catalin.marinas at arm.com
Mon Nov 7 02:54:36 PST 2022


On Mon, Nov 07, 2022 at 10:46:03AM +0100, Christoph Hellwig wrote:
> > +static inline bool dma_sg_kmalloc_needs_bounce(struct device *dev,
> > +					       struct scatterlist *sg, int nents,
> > +					       enum dma_data_direction dir)
> > +{
> > +	struct scatterlist *s;
> > +	int i;
> > +
> > +	if (!IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) ||
> > +	    dir == DMA_TO_DEVICE || dev_is_dma_coherent(dev))
> > +		return false;
> 
> This part should be shared with dma-direct in a well documented helper.
> 
> > +	for_each_sg(sg, s, nents, i) {
> > +		if (dma_kmalloc_needs_bounce(dev, s->length, dir))
> > +			return true;
> > +	}
> 
> And for this loop iteration I'd much prefer it to be out of line, and
> also not available in a global helper.
> 
> But maybe someone can come up with a nice tweak to the dma-iommu
> code to not require the extra sglist walk anyway.

An idea: we could add another member to struct scatterlist to track the
bounced address. We could then do the bouncing in a similar way to
iommu_dma_map_sg_swiotlb() but without the IOVA allocation, leaving the
IOVA allocation and mapping as a common path for both the bounced and
non-bounced cases.
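
Roughly (completely untested; dma_bounce_addr is a made-up name for the
new scatterlist member, and the swiotlb arguments are from memory of
what iommu_dma_map_page() passes on its bounce path):

static int iommu_dma_bounce_sg(struct device *dev, struct scatterlist *sg,
			       int nents, enum dma_data_direction dir,
			       unsigned long attrs)
{
	struct iommu_domain *domain = iommu_get_dma_domain(dev);
	struct iommu_dma_cookie *cookie = domain->iova_cookie;
	struct iova_domain *iovad = &cookie->iovad;
	struct scatterlist *s;
	int i;

	for_each_sg(sg, s, nents, i) {
		phys_addr_t phys;

		s->dma_bounce_addr = 0;
		if (!dma_kmalloc_needs_bounce(dev, s->length, dir))
			continue;

		/* bounce as iommu_dma_map_page() does, but no IOVA yet */
		phys = swiotlb_tbl_map_single(dev, sg_phys(s), s->length,
					      iova_align(iovad, s->length),
					      iova_mask(iovad), dir, attrs);
		if (phys == DMA_MAPPING_ERROR)
			return -ENOMEM;
		s->dma_bounce_addr = phys;
	}

	return 0;
}

The common iommu_dma_map_sg() path would then allocate the IOVA and map
either sg_phys(s) or s->dma_bounce_addr, depending on whether the
segment was bounced.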

-- 
Catalin
