[PATCH v2 4/8] dma-mapping: Separate DMA sync issuing and completion waiting

Leon Romanovsky leon at kernel.org
Sat Dec 27 12:07:06 PST 2025


On Sat, Dec 27, 2025 at 11:52:44AM +1300, Barry Song wrote:
> From: Barry Song <baohua at kernel.org>
> 
> Currently, arch_sync_dma_for_cpu and arch_sync_dma_for_device
> always wait for the completion of each DMA buffer. That is,
> issuing the DMA sync and waiting for completion is done in a
> single API call.
> 
> For scatter-gather lists with multiple entries, this means the
> issue-and-wait cycle is repeated for every entry, which can hurt
> performance. Architectures like ARM64 may instead be able to issue
> the DMA sync operations for all entries first and then wait for
> completion only once.
> 
> To address this, arch_sync_dma_for_* now only issues the DMA sync
> operations; waiting for completion is split out into a separate
> flush. On ARM64, the flush is implemented as a dsb instruction in
> arch_sync_dma_flush().
> 
> For now, add arch_sync_dma_flush() after each
> arch_sync_dma_for_*() call. arch_sync_dma_flush() is defined as a
> no-op on all architectures except arm64, so this patch does not
> change existing behavior. Subsequent patches will introduce true
> batching for SG DMA buffers.
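
Just to check that I follow where this is going: for an SG path I would
expect the later patches to end up with roughly the shape below.
example_sync_sg_for_device() is a made-up name for illustration, not a
helper from this series:

#include <linux/dma-map-ops.h>
#include <linux/scatterlist.h>

static void example_sync_sg_for_device(struct scatterlist *sgl, int nents,
                                       enum dma_data_direction dir)
{
        struct scatterlist *sg;
        int i;

        /* Issue the cache maintenance for every entry first... */
        for_each_sg(sgl, sg, nents, i)
                arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);

        /* ...then wait for completion once: a dsb on arm64, a no-op elsewhere. */
        arch_sync_dma_flush();
}

With this patch alone the flush simply follows each arch_sync_dma_for_*()
call, so behaviour is unchanged until the SG users are converted.
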
> 
> Cc: Leon Romanovsky <leon at kernel.org>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: Will Deacon <will at kernel.org>
> Cc: Marek Szyprowski <m.szyprowski at samsung.com>
> Cc: Robin Murphy <robin.murphy at arm.com>
> Cc: Ada Couprie Diaz <ada.coupriediaz at arm.com>
> Cc: Ard Biesheuvel <ardb at kernel.org>
> Cc: Marc Zyngier <maz at kernel.org>
> Cc: Anshuman Khandual <anshuman.khandual at arm.com>
> Cc: Ryan Roberts <ryan.roberts at arm.com>
> Cc: Suren Baghdasaryan <surenb at google.com>
> Cc: Joerg Roedel <joro at 8bytes.org>
> Cc: Juergen Gross <jgross at suse.com>
> Cc: Stefano Stabellini <sstabellini at kernel.org>
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko at epam.com>
> Cc: Tangquan Zheng <zhengtangquan at oppo.com>
> Signed-off-by: Barry Song <baohua at kernel.org>
> ---
>  arch/arm64/include/asm/cache.h |  6 ++++++
>  arch/arm64/mm/dma-mapping.c    |  4 ++--
>  drivers/iommu/dma-iommu.c      | 37 +++++++++++++++++++++++++---------
>  drivers/xen/swiotlb-xen.c      | 24 ++++++++++++++--------
>  include/linux/dma-map-ops.h    |  6 ++++++
>  kernel/dma/direct.c            |  8 ++++++--
>  kernel/dma/direct.h            |  9 +++++++--
>  kernel/dma/swiotlb.c           |  4 +++-
>  8 files changed, 73 insertions(+), 25 deletions(-)

<...>

> +#ifndef arch_sync_dma_flush
> +static inline void arch_sync_dma_flush(void)
> +{
> +}
> +#endif
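
For the record, I read the arm64 side (the asm/cache.h hunk, not quoted
here) as essentially the following; the exact barrier scope may differ in
the actual patch:

/*
 * arm64: arch_sync_dma_for_*() only issues the cache maintenance;
 * completion is only guaranteed once this barrier has executed.
 */
#define arch_sync_dma_flush arch_sync_dma_flush
static inline void arch_sync_dma_flush(void)
{
        dsb(sy);
}

and the #ifndef above is the generic no-op fallback for everyone else.
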

Over the weekend I realized a useful advantage of the ARCH_HAS_* config
options (ARCH_HAS_SYNC_DMA_FOR_DEVICE and friends): they make it
straightforward to see the whole DMA path for an architecture just by
looking at its .config.
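
That is, instead of the #ifndef fallback, something along these lines,
where ARCH_HAS_SYNC_DMA_FLUSH is just a name I made up to match the
existing ARCH_HAS_SYNC_DMA_FOR_* symbols:

/* include/linux/dma-map-ops.h */
#ifdef CONFIG_ARCH_HAS_SYNC_DMA_FLUSH
void arch_sync_dma_flush(void);
#else
static inline void arch_sync_dma_flush(void)
{
}
#endif

with arm64 selecting the symbol in its Kconfig, so which architectures
batch and which do not is visible from the .config alone.
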

Thanks,
Reviewed-by: Leon Romanovsky <leonro at nvidia.com>


