[PATCH v3 03/13] iommu/dma: Force bouncing if the size is not cacheline-aligned

Christoph Hellwig hch at lst.de
Mon Nov 7 01:46:03 PST 2022


> +static inline bool dma_sg_kmalloc_needs_bounce(struct device *dev,
> +					       struct scatterlist *sg, int nents,
> +					       enum dma_data_direction dir)
> +{
> +	struct scatterlist *s;
> +	int i;
> +
> +	if (!IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) ||
> +	    dir == DMA_TO_DEVICE || dev_is_dma_coherent(dev))
> +		return false;

This part should be shared with dma-direct in a well-documented helper.
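Something along these lines, just as a sketch (the helper name and its
placement next to dma_kmalloc_needs_bounce() are placeholders, not part
of this patch):

	/*
	 * Hypothetical shared helper, e.g. in include/linux/dma-map-ops.h,
	 * usable by both dma-direct and dma-iommu.
	 */
	static inline bool dma_kmalloc_safe(struct device *dev,
					    enum dma_data_direction dir)
	{
		/* Bouncing disabled: kmalloc() caches are already DMA-aligned. */
		if (!IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC))
			return true;

		/*
		 * DMA_TO_DEVICE only cleans the cache, and coherent devices
		 * need no cache maintenance at all, so unaligned kmalloc()
		 * buffers are safe in those cases.
		 */
		return dir == DMA_TO_DEVICE || dev_is_dma_coherent(dev);
	}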

> +	for_each_sg(sg, s, nents, i) {
> +		if (dma_kmalloc_needs_bounce(dev, s->length, dir))
> +			return true;
> +	}

And I'd much prefer this loop to be out of line, and also not made
available as a global helper.
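Roughly like this (sketch only, the name is a placeholder; it would be
a static function local to drivers/iommu/dma-iommu.c):

	static bool iommu_dma_sg_needs_bounce(struct device *dev,
					      struct scatterlist *sg, int nents,
					      enum dma_data_direction dir)
	{
		struct scatterlist *s;
		int i;

		/* Out of line: no need to inline this walk everywhere. */
		for_each_sg(sg, s, nents, i)
			if (dma_kmalloc_needs_bounce(dev, s->length, dir))
				return true;

		return false;
	}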

But maybe someone can come up with a nice tweak to the dma-iommu
code to not require the extra sglist walk anyway.
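For example (a very rough and heavily simplified sketch, making
assumptions about the surrounding iommu_dma_map_sg() structure), the
check could be folded into the per-entry walk that iommu_dma_map_sg()
already does, diverting to iommu_dma_map_sg_swiotlb() from there
instead of adding a separate pre-pass:

	static int iommu_dma_map_sg_sketch(struct device *dev,
					   struct scatterlist *sg, int nents,
					   enum dma_data_direction dir,
					   unsigned long attrs)
	{
		bool check_bounce =
			IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
			dir != DMA_TO_DEVICE && !dev_is_dma_coherent(dev);
		struct scatterlist *s;
		size_t iova_len = 0;
		int i;

		for_each_sg(sg, s, nents, i) {
			/* Divert to the bounce path from the existing walk. */
			if (check_bounce &&
			    dma_kmalloc_needs_bounce(dev, s->length, dir))
				return iommu_dma_map_sg_swiotlb(dev, sg, nents,
								dir, attrs);

			/* Stand-in for the real per-entry IOVA accounting. */
			iova_len += s->length;
		}

		/* ... the normal IOVA allocation/mapping path would follow ... */
		return iova_len ? nents : 0;
	}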


