GCC built-in atomic operations and memory barriers

Toby Douglass trd at 45mercystreet.com
Wed Nov 4 15:12:10 EST 2009


Russell King - ARM Linux wrote:
> On Wed, Nov 04, 2009 at 07:09:37PM +0100, Toby Douglass wrote:
>> This leads me to want to use smp_mb().  However, from what I can see,
>> this macro is only available via the linux kernel headers; it's not
>> available in user-mode.  Is this correct?
> 
> Correct.

Thanks.  It's often hard on the net to track down a negative answer.
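
For anyone else hunting for this: the closest user-mode equivalent seems
to be GCC's __sync_synchronize() builtin (available since GCC 4.1),
which emits a full barrier.  A minimal sketch; note that on pre-ARMv7
cores it may expand to a libgcc or kernel-helper call rather than an
inline dmb:

  /* Sketch only: user-mode full memory barrier via a GCC builtin.
   * It orders all earlier loads and stores against all later ones,
   * i.e. it is the nearest user-mode analogue of smp_mb(). */
  static inline void user_mb(void)
  {
          __sync_synchronize();
  }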

[snip]

While we're talking about the GCC atomics...

This appears to be the current kernel code for CAS:

  static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
                                        unsigned long new, int size)

  unsigned long oldval, res;

[snip]

          do {
                  asm volatile("@ __cmpxchg4\n"
                  "       ldrex   %1, [%2]\n"
                  "       mov     %0, #0\n"
                  "       teq     %1, %3\n"
                  "       strexeq %0, %4, [%2]\n"
                          : "=&r" (res), "=&r" (oldval)
                          : "r" (ptr), "Ir" (old), "r" (new)
                          : "memory", "cc");
          } while (res);
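
As an aside: in user mode the nearest GCC-builtin analogue appears to be
__sync_bool_compare_and_swap(), which on ARMv7 (given a suitable -march)
should expand to much the same ldrex/strex retry loop.  A sketch, not
the kernel API:

  /* Sketch: user-mode CAS via a GCC builtin; returns non-zero if the
   * swap happened.  GCC is expected to generate an ldrex/teq/strexeq
   * loop comparable to the kernel's __cmpxchg on ARMv7. */
  static inline int cas_ulong(volatile unsigned long *ptr,
                              unsigned long old, unsigned long new)
  {
          return __sync_bool_compare_and_swap(ptr, old, new);
  }

Back to the kernel version: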

The "mov %0, #0" - why is it placed between the ldrex and the strexeq?
It seems to me it could just as well happen before the ldrex, and doing
so would shrink the window between the ldrex and the strexeq, reducing
the chance of another CPU modifying our target in the meantime.
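
Concretely, the reordering I have in mind is the following (an untested
sketch - I may be missing a pipeline or scheduling reason for the
current placement):

          do {
                  asm volatile("@ __cmpxchg4, mov hoisted above ldrex\n"
                  "       mov     %0, #0\n"       /* clear res up front */
                  "       ldrex   %1, [%2]\n"     /* load-exclusive oldval */
                  "       teq     %1, %3\n"
                  "       strexeq %0, %4, [%2]\n" /* store new if oldval == old */
                          : "=&r" (res), "=&r" (oldval)
                          : "r" (ptr), "Ir" (old), "r" (new)
                          : "memory", "cc");
          } while (res);

The earlyclobber ("=&r") constraints already keep %0 and %1 out of the
input registers, so as far as I can see clearing %0 before the ldrex is
functionally identical.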



