[PATCH] arm64: enable EDAC on arm64
Will Deacon
will.deacon at arm.com
Thu Nov 7 04:51:39 EST 2013
On Wed, Nov 06, 2013 at 06:39:18PM +0000, Rob Herring wrote:
> On Wed, Nov 6, 2013 at 9:26 AM, Catalin Marinas <catalin.marinas at arm.com> wrote:
> > On Wed, Nov 06, 2013 at 01:02:24PM +0000, Rob Herring wrote:
> >> +static inline void atomic_scrub(void *va, u32 size)
> >> +{
> >> + unsigned int *virt_addr = va;
> >> + unsigned int temp, temp2;
> >> + unsigned int i;
> >> +
> >> + for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++) {
> >> + /*
> >> + * No need to check for store failure, another write means
> >> + * the scrubbing has effectively already been done for us.
> >> + */
> >> + asm volatile("\n"
> >> + " ldxr %0, %2\n"
> >> + " stxr %w1, %0, %2\n"
> >> + : "=&r" (temp), "=&r" (temp2), "+Q" (virt_addr)
> >> + : : "cc");
> >
> > But failure of stxr does not necessarily mean another write. It can be
> > an interrupt, cache line migration etc. The exclusive monitor can be
> > emulated in many ways.
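(For reference, the conventional LL/SC idiom treats a failed stxr
simply as "try again" rather than as proof that someone else wrote
the location. A minimal sketch of the retry form, keeping the patch's
32-bit accesses; whether scrubbing actually needs the retry is the
open question here:

	unsigned int tmp, res;

	/* Retry until the exclusive store succeeds. */
	do {
		asm volatile(
		"	ldxr	%w0, %2\n"
		"	stxr	%w1, %w0, %2\n"
		: "=&r" (tmp), "=&r" (res), "+Q" (*virt_addr));
	} while (res);
)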
>
> Right, I was thinking I could simplify things.
>
> In that case, I could implement this with just "atomic64_add(0,
> virt_addr)", but is there any guarantee that atomic64_t has a size of
> 8 bytes and that I can simply increment an atomic64_t ptr?
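(As an aside, that version would look roughly as below. It leans on
exactly the guarantees being asked about, i.e. that atomic64_t is 8
bytes with no padding and that the region is suitably aligned; a
sketch, not a tested implementation:

#include <linux/atomic.h>

static inline void atomic_scrub(void *va, u32 size)
{
	atomic64_t *ptr = va;
	unsigned int i;

	/*
	 * Adding zero forces an atomic read-modify-write of each
	 * doubleword, rewriting the data (and hence its ECC bits)
	 * in place.
	 */
	for (i = 0; i < size / sizeof(*ptr); i++, ptr++)
		atomic64_add(0, ptr);
}
)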
>
> > BTW, can you not use 64-bit loads/stores?
>
> Correct, that should be a long instead of int.
Are we guaranteed that va is a 64-bit aligned pointer? Also, the usual
comment about the "cc" clobber applies: neither ldxr nor stxr touches
the condition flags, so the clobber can be dropped.
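For illustration, folding these comments in might give something like
the sketch below: 64-bit accesses, no "cc" clobber, the alignment
assumption made explicit with a BUG_ON, and the retry that Catalin's
point about the exclusive monitor implies. This is only a sketch of
the direction, not the eventual code:

#include <linux/bug.h>
#include <linux/kernel.h>

static inline void atomic_scrub(void *va, u32 size)
{
	unsigned long *virt_addr = va;
	unsigned long temp;
	unsigned int res;
	unsigned int i;

	/* The exclusives require a naturally aligned doubleword. */
	BUG_ON(!IS_ALIGNED((unsigned long)va, sizeof(*virt_addr)));

	for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++) {
		/*
		 * Rewrite each doubleword in place via ldxr/stxr,
		 * retrying on stxr failure: a failed exclusive does
		 * not mean another write already happened.
		 */
		do {
			asm volatile(
			"	ldxr	%0, %2\n"
			"	stxr	%w1, %0, %2\n"
			: "=&r" (temp), "=&r" (res), "+Q" (*virt_addr));
		} while (res);
	}
}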
Will