[PATCH] arm64: dma: Drop cache invalidation from arch_dma_prep_coherent()

Will Deacon will at kernel.org
Tue Aug 23 05:21:11 PDT 2022


arch_dma_prep_coherent() is called when preparing a non-cacheable region
for a consistent DMA buffer allocation. Since the buffer pages may
previously have been written via a cacheable mapping and consequently
allocated as dirty cachelines, the purpose of this function is to remove
these dirty lines from the cache, writing them back so that the
non-coherent device is able to see them.

On arm64, this operation can be achieved with a clean to the point of
coherency; a subsequent invalidation is not required and serves little
purpose in the presence of a cacheable alias (e.g. the linear map),
since clean lines can be speculatively fetched back into the cache after
the invalidation operation has completed.

Relax the cache maintenance in arch_dma_prep_coherent() so that only a
clean, rather than a clean-and-invalidate, operation is performed.

Cc: Catalin Marinas <catalin.marinas at arm.com>
Cc: Mark Rutland <mark.rutland at arm.com>
Cc: Robin Murphy <robin.murphy at arm.com>
Cc: Christoph Hellwig <hch at lst.de>
Cc: Ard Biesheuvel <ardb at kernel.org>
Signed-off-by: Will Deacon <will at kernel.org>
---

I'm slightly wary about this change as other architectures seem to do
clean+invalidate here, but I'd like to hear what others think in any
case.

 arch/arm64/mm/dma-mapping.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 599cf81f5685..83a512a6ff0d 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -36,7 +36,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
 {
 	unsigned long start = (unsigned long)page_address(page);
 
-	dcache_clean_inval_poc(start, start + size);
+	dcache_clean_poc(start, start + size);
 }
 
 #ifdef CONFIG_IOMMU_DMA
-- 
2.37.1.595.g718a3a8f04-goog