[RFC PATCH] ARM: Change the mandatory barriers implementation
Catalin Marinas
catalin.marinas at arm.com
Wed Feb 3 11:15:34 EST 2010
(I cc'ed LKML as well, just in case I got the semantics of the
mandatory barriers wrong)
The mandatory barriers (mb, rmb, wmb) are used even on uniprocessor
systems for things like ordering Normal Non-cacheable memory accesses
with a DMA transfer (started via a Device memory write). The current
implementation uses dmb() for mb() and friends, but this is not
sufficient. A DMB only ensures the ordering of accesses with respect
to a single observer accessing the same memory. If a DMA transfer is
started by a write to Device memory, the data to be transferred may
not reach the main memory (even if mapped as Normal Non-cacheable)
before the device receives the notification to begin the transfer.
The only barrier that helps in this situation is a DSB, which
completely drains the write buffers.
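To illustrate (this fragment is not part of the patch; dev_base, the
register offset and start_dma() are made up), the sequence that needs
the DSB is the usual "write the buffer, then poke the device" pattern
in a driver:

	/* Illustrative only: names and offsets are hypothetical. */
	#include <linux/io.h>

	static void start_dma(void __iomem *dev_base, u32 *buf, u32 desc_word)
	{
		buf[0] = desc_word;	/* store to Normal Non-cacheable buffer */

		/*
		 * wmb() must drain the write buffers (DSB). A DMB would
		 * only order the two stores as seen by another observer
		 * of the same memory; it would not guarantee that buf[0]
		 * has reached main memory before the device sees the
		 * Device memory write below and starts fetching.
		 */
		wmb();

		writel(1, dev_base + 0x04);	/* control reg: kick the transfer */
	}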
The patch also adds support for platform-defined barriers that can be
defined in mach/barriers.h. This is required by at least two platforms -
MSM and RealView (possibly OMAP as well). On RealView with an outer
cache (PL310 for example), stores to Normal Non-cacheable memory are
buffered by the outer cache but a DSB does not reach that far. A
separate L2x0 cache sync command is required (a store to Strongly
Ordered memory would do as well, similar to the MSM requirements, and
may be faster).
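For reference, a platform selecting ARCH_HAS_BARRIERS would then
provide something along these lines in mach/barriers.h. This is only
a sketch; the outer-cache sync helper name is an assumption, not code
from this patch:

	/* arch/arm/mach-xxx/include/mach/barriers.h -- sketch only */
	#ifndef __MACH_BARRIERS_H
	#define __MACH_BARRIERS_H

	/*
	 * Assumed platform helper that issues the L2x0 cache sync
	 * operation (or, alternatively, performs a dummy store to
	 * Strongly Ordered memory) so that buffered Normal Non-cacheable
	 * writes are pushed out past the outer cache.
	 */
	extern void platform_cache_sync(void);

	#define mb()	do { dsb(); platform_cache_sync(); } while (0)
	#define rmb()	dmb()
	#define wmb()	mb()

	#endif /* __MACH_BARRIERS_H */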
Note that the SMP barriers are not affected as they only deal with
ordering in Normal memory. There is, however, a situation with the use
of IPIs: a DMB is not enough to ensure that a write to Normal memory
is strictly ordered with respect to the IPI generation (and the
subsequent interrupt handling). A solution is for the users of
smp_call_function() to use a mandatory barrier.
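As an example of the latter (illustrative only, the flag and handler
are made up), a caller of smp_call_function() that publishes data for
the remote handler would do:

	#include <linux/kernel.h>
	#include <linux/smp.h>

	static int shared_flag;		/* read by the remote handler */

	static void remote_handler(void *info)
	{
		/* expects to observe shared_flag == 1 */
		if (!shared_flag)
			printk(KERN_WARNING "shared_flag store not visible\n");
	}

	static void kick_other_cpus(void)
	{
		shared_flag = 1;

		/*
		 * Mandatory barrier (DSB) rather than smp_wmb() (DMB
		 * only): the store above must be ordered with respect to
		 * the IPI that smp_call_function() generates, not just
		 * with respect to other Normal memory accesses.
		 */
		mb();

		smp_call_function(remote_handler, NULL, 1);
	}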
Signed-off-by: Catalin Marinas <catalin.marinas at arm.com>
Cc: Russell King <linux at arm.linux.org.uk>
Cc: Daniel Walker <dwalker at codeaurora.org>
Cc: Larry Bassel <lbassel at quicinc.com>
Cc: Tony Lindgren <tony at atomide.com>
---
arch/arm/include/asm/system.h | 18 ++++++++----------
arch/arm/mm/Kconfig | 6 ++++++
2 files changed, 14 insertions(+), 10 deletions(-)
diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h
index 058e7e9..477861d 100644
--- a/arch/arm/include/asm/system.h
+++ b/arch/arm/include/asm/system.h
@@ -138,14 +138,12 @@ extern unsigned int user_debug;
#define dmb() __asm__ __volatile__ ("" : : : "memory")
#endif
-#if __LINUX_ARM_ARCH__ >= 7 || defined(CONFIG_SMP)
-#define mb() dmb()
-#define rmb() dmb()
-#define wmb() dmb()
+#ifdef CONFIG_ARCH_HAS_BARRIERS
+#include <mach/barriers.h>
#else
-#define mb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
-#define rmb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
-#define wmb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
+#define mb() dsb()
+#define rmb() dmb()
+#define wmb() dsb()
#endif
#ifndef CONFIG_SMP
@@ -153,9 +151,9 @@ extern unsigned int user_debug;
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#else
-#define smp_mb() mb()
-#define smp_rmb() rmb()
-#define smp_wmb() wmb()
+#define smp_mb() dmb()
+#define smp_rmb() dmb()
+#define smp_wmb() dmb()
#endif
#define read_barrier_depends() do { } while(0)
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index f62beb7..f67f2c4 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -802,3 +802,9 @@ config ARM_L1_CACHE_SHIFT
int
default 6 if ARCH_OMAP3 || ARCH_S5PC1XX
default 5
+
+config ARCH_HAS_BARRIERS
+ bool
+ help
+ This option allows the use of custom mandatory barriers
+ included via the mach/barriers.h file.