[PATCH 5/6] dma-mapping: Allow batched DMA sync operations if supported by the arch
Leon Romanovsky
leon at kernel.org
Sun Dec 21 03:55:23 PST 2025
On Fri, Dec 19, 2025 at 01:36:57PM +0800, Barry Song wrote:
> From: Barry Song <v-songbaohua at oppo.com>
>
> This enables dma_direct_sync_sg_for_device, dma_direct_sync_sg_for_cpu,
> dma_direct_map_sg, and dma_direct_unmap_sg to use batched DMA sync
> operations when possible. This significantly improves performance on
> devices without hardware cache coherence.
>
> Tangquan's initial results show that batched synchronization can reduce
> dma_map_sg() time by 64.61% and dma_unmap_sg() time by 66.60% on an MTK
> phone platform (MediaTek Dimensity 9500). The tests were performed by
> pinning the task to CPU7 and fixing the CPU frequency at 2.6 GHz,
> running dma_map_sg() and dma_unmap_sg() on 10 MB buffers (10 MB / 4 KB
> sg entries per buffer) for 200 iterations and then averaging the
> results.
>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: Will Deacon <will at kernel.org>
> Cc: Marek Szyprowski <m.szyprowski at samsung.com>
> Cc: Robin Murphy <robin.murphy at arm.com>
> Cc: Ada Couprie Diaz <ada.coupriediaz at arm.com>
> Cc: Ard Biesheuvel <ardb at kernel.org>
> Cc: Marc Zyngier <maz at kernel.org>
> Cc: Anshuman Khandual <anshuman.khandual at arm.com>
> Cc: Ryan Roberts <ryan.roberts at arm.com>
> Cc: Suren Baghdasaryan <surenb at google.com>
> Cc: Tangquan Zheng <zhengtangquan at oppo.com>
> Signed-off-by: Barry Song <v-songbaohua at oppo.com>
> ---
> kernel/dma/direct.c | 28 ++++++++++-----
> kernel/dma/direct.h | 86 +++++++++++++++++++++++++++++++++++++++------
> 2 files changed, 95 insertions(+), 19 deletions(-)
<...>
> if (!dev_is_dma_coherent(dev))
> - arch_sync_dma_for_device(paddr, sg->length,
> - dir);
> + arch_sync_dma_for_device_batch_add(paddr, sg->length, dir);
<...>
> -static inline dma_addr_t dma_direct_map_phys(struct device *dev,
> +#ifdef CONFIG_ARCH_WANT_BATCHED_DMA_SYNC
> +static inline void dma_direct_sync_single_for_cpu_batch_add(struct device *dev,
> + dma_addr_t addr, size_t size, enum dma_data_direction dir)
> +{
> + phys_addr_t paddr = dma_to_phys(dev, addr);
> +
> + if (!dev_is_dma_coherent(dev))
> + arch_sync_dma_for_cpu_batch_add(paddr, size, dir);
> +
> + __dma_direct_sync_single_for_cpu(dev, paddr, size, dir);
> +}
> +#endif
> +
> +static inline void dma_direct_sync_single_for_cpu(struct device *dev,
> + dma_addr_t addr, size_t size, enum dma_data_direction dir)
> +{
> + phys_addr_t paddr = dma_to_phys(dev, addr);
> +
> + if (!dev_is_dma_coherent(dev))
> + arch_sync_dma_for_cpu(paddr, size, dir);
> +
> + __dma_direct_sync_single_for_cpu(dev, paddr, size, dir);
> +}
> +
I'm wondering why you don't implement this batch-sync support inside the
arch_sync_dma_*() functions. Doing so would minimize changes to the generic
kernel/dma/* code and reduce the amount of #ifdef-based spaghetti.
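
Something along these lines is what I have in mind, purely an untested
sketch for arm64: dcache_clean_poc_nobarrier() and arch_sync_dma_flush()
are made-up names used only for illustration, and the question of when
exactly to flush is hand-waved here.

/* arch/arm64/mm/dma-mapping.c (illustrative only, not tested) */

#include <linux/dma-map-ops.h>
#include <linux/percpu.h>
#include <asm/barrier.h>
#include <asm/cacheflush.h>

static DEFINE_PER_CPU(bool, dma_sync_batch_pending);

void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
			      enum dma_data_direction dir)
{
	unsigned long start = (unsigned long)phys_to_virt(paddr);

	/*
	 * Hypothetical variant of dcache_clean_poc() that does the
	 * per-line maintenance but defers the DSB to the flush hook
	 * below, so a whole scatterlist pays for one barrier only.
	 */
	dcache_clean_poc_nobarrier(start, start + size);
	__this_cpu_write(dma_sync_batch_pending, true);
}

/* Hypothetical hook, called once at the end of a mapping batch. */
void arch_sync_dma_flush(void)
{
	if (__this_cpu_read(dma_sync_batch_pending)) {
		dsb(sy);
		__this_cpu_write(dma_sync_batch_pending, false);
	}
}

With something like that, the generic code would at most need a single
flush call at the end of dma_direct_map_sg() and friends, instead of the
*_batch_add variants duplicated under CONFIG_ARCH_WANT_BATCHED_DMA_SYNC.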
Thanks.