[PATCH 1/4] dma: define __dma_aligned attribute
Ahmad Fatoum
a.fatoum at pengutronix.de
Thu Sep 21 02:56:46 PDT 2023
Unlike the kernel, we always map the barebox stack 1:1, so DMA to the barebox
stack is ok as long as care is taken for alignment. Otherwise, cache maintenance
may end up clobbering data unintentionally.
Provide a __dma_aligned attribute for use in such situations and use the
already existing DMA_ALIGNMENT as alignment.
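For illustration, a stack buffer handed to a device for DMA could be
annotated roughly like this (a minimal sketch; dma_read_block() is a
made-up placeholder, not an existing barebox function):

	#include <dma.h>
	#include <linux/types.h>

	int dma_read_block(void *buf, size_t len); /* hypothetical helper */

	static int read_block(void)
	{
		/*
		 * Both start and end of the buffer fall on DMA_ALIGNMENT
		 * boundaries, so cache maintenance on it can't clobber
		 * neighbouring stack data.
		 */
		u8 buf[DMA_ALIGNMENT] __dma_aligned;

		return dma_read_block(buf, sizeof(buf));
	}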
To be able to do that, we need to make sure that the default DMA_ALIGNMENT
is only defined when an architecture doesn't define its own dma_alloc. If
it does, it is responsible for defining its own DMA_ALIGNMENT as well.
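An architecture that provides its own dma_alloc() would then carry both
definitions itself, roughly along these lines (a sketch of a hypothetical
arch header, not copied from any existing architecture):

	/* hypothetical <asm/dma.h> */
	#define DMA_ALIGNMENT	64	/* outer cache line size */

	void *dma_alloc(size_t size);
	#define dma_alloc dma_alloc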
The new attribute is intentionally not called __cacheline_aligned,
because it differs functionally: we care about the cache line size of
the outer cache, while in Linux __cacheline_aligned is about the L1 cache.
A __dma_aligned attribute wouldn't make sense for Linux, as it would be too
easy to abuse (e.g. placing it on a stack buffer with VMAP_STACK enabled),
but in barebox we do this in many places, and an attribute increases
readability and even safety.
Signed-off-by: Ahmad Fatoum <a.fatoum at pengutronix.de>
---
include/dma.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/dma.h b/include/dma.h
index 2a09b747d1e2..469c482e7a3a 100644
--- a/include/dma.h
+++ b/include/dma.h
@@ -17,17 +17,19 @@
#define DMA_ADDRESS_BROKEN NULL
+#ifndef dma_alloc
#ifndef DMA_ALIGNMENT
#define DMA_ALIGNMENT 32
#endif
-#ifndef dma_alloc
static inline void *dma_alloc(size_t size)
{
return xmemalign(DMA_ALIGNMENT, ALIGN(size, DMA_ALIGNMENT));
}
#endif
+#define __dma_aligned __attribute__((__aligned__((DMA_ALIGNMENT))))
+
#ifndef dma_free
static inline void dma_free(void *mem)
{
--
2.39.2