[PATCH 4/6] ARM: locks: prefetch the destination word for write prior to strex

Will Deacon will.deacon at arm.com
Thu Jul 25 14:05:18 EDT 2013


On Thu, Jul 25, 2013 at 06:55:44PM +0100, Stephen Boyd wrote:
> On 07/25/13 10:45, Will Deacon wrote:
> > On Thu, Jul 25, 2013 at 06:37:48PM +0100, Stephen Boyd wrote:
> >> On 07/25/13 10:31, Stephen Boyd wrote:
> >>> Maybe I'm wrong, but can't you just remove the casts and leave the
> >>> function as static inline? Casting to const void * is pretty much
> >>> telling the compiler to turn off type checking.
> >>>
> >> Oh joy. Why is rwlock's lock member marked volatile?
> > Yeah, that was the problematic guy. However, I had to fix that anyway in
> > this patch because otherwise the definition of prefetchw when
> > !ARCH_HAS_PREFETCHW (which expands to __builtin_prefetch(x,1)) would explode.
> >
> > So, given that I've fixed the rwlocks, I think I could turn prefetch and
> > prefetchw back into static inline functions. What do you reckon?
> 
> It would be good to match the builtin function's signature so that we
> don't explode in the future on !ARCH_HAS_PREFETCHW configs.

Ok, so that's basically just undoing the macroisation on top of v2 (fixup
below).
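
For the record, the clash with the volatile member comes from the generic
!ARCH_HAS_PREFETCHW fallback, which boils down to __builtin_prefetch(x, 1)
and therefore takes a const void *. A minimal sketch of the failure mode
(fake_rwlock_t is illustrative, not the real arch_rwlock_t):

typedef struct {
        volatile unsigned int lock;     /* stand-in for the old rwlock member */
} fake_rwlock_t;

static inline void generic_prefetchw(const void *ptr)
{
        __builtin_prefetch(ptr, 1);     /* 1 == prefetch with intent to write */
}

void example(fake_rwlock_t *rw)
{
        /* Discards the 'volatile' qualifier from the pointer target type,
         * so the compiler warns here, exactly as the generic prefetchw
         * fallback would. */
        generic_prefetchw(&rw->lock);
}

With the lock member de-volatiled and prefetch/prefetchw back as static
inlines taking const void *, the asm and builtin variants accept the same
arguments and produce the same diagnostics.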

Will

--->8

diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
index dde7ecc..dac9429 100644
--- a/arch/arm/include/asm/processor.h
+++ b/arch/arm/include/asm/processor.h
@@ -109,25 +109,25 @@ unsigned long get_wchan(struct task_struct *p);
 #if __LINUX_ARM_ARCH__ >= 5
 
 #define ARCH_HAS_PREFETCH
-#define prefetch(p)                                                    \
-({                                                                     \
-       __asm__ __volatile__(                                           \
-               "pld\t%a0"                                              \
-               :: "p" (p));                                            \
-})
+static inline void prefetch(const void *ptr)
+{
+       __asm__ __volatile__(
+               "pld\t%a0"
+               :: "p" (ptr));
+}
 
 #if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
 #define ARCH_HAS_PREFETCHW
-#define prefetchw(p)                                                   \
-({                                                                     \
-       __asm__ __volatile__(                                           \
-               ".arch_extension        mp\n"                           \
-               __ALT_SMP_ASM(                                          \
-                       WASM(pldw)              "\t%a0",                \
-                       WASM(pld)               "\t%a0"                 \
-               )                                                       \
-               :: "p" (p));                                            \
-})
+static inline void prefetchw(const void *ptr)
+{
+       __asm__ __volatile__(
+               ".arch_extension        mp\n"
+               __ALT_SMP_ASM(
+                       WASM(pldw)              "\t%a0",
+                       WASM(pld)               "\t%a0"
+               )
+               :: "p" (ptr));
+}
 #endif
 #endif
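
For completeness, the reason prefetchw matters here is the subject line:
issuing the write hint before the exclusive sequence can bring the cache
line into a unique (writable) state, so the subsequent strex is less likely
to fail and bounce the line. A rough sketch of the pattern (not code from
this series; the real lock slow paths also use WFE and follow the acquire
with smp_mb()):

static inline void example_lock(unsigned long *lock)
{
        unsigned long tmp;

        prefetchw(lock);                /* hint: we're about to write *lock */
        __asm__ __volatile__(
"1:     ldrex   %0, [%1]\n"             /* load-exclusive the lock word */
"       teq     %0, #0\n"               /* already held? */
"       strexeq %0, %2, [%1]\n"         /* free: try to claim it */
"       teqeq   %0, #0\n"               /* did the strex succeed? */
"       bne     1b"                     /* held or lost the race: retry */
        : "=&r" (tmp)
        : "r" (lock), "r" (1UL)
        : "cc", "memory");
}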



