[RFC PATCH v3 4/5] riscv/cmpxchg: Implement cmpxchg for variables of size 1 and 2

Leonardo Bras Soares Passos leobras at redhat.com
Fri Aug 4 20:14:31 PDT 2023


Hello Guo Ren, thanks for the feedback!

On Fri, Aug 4, 2023 at 2:45 PM Guo Ren <guoren at kernel.org> wrote:
>
> On Fri, Aug 4, 2023 at 4:49 AM Leonardo Bras <leobras at redhat.com> wrote:
> >
> > cmpxchg for variables of size 1-byte and 2-bytes is not yet available for
> > riscv, even though it's present in other architectures such as arm64 and
> > x86. This means some locking mechanisms cannot be implemented, or require
> > rework to function properly.
> >
> > Implement 1-byte and 2-byte cmpxchg in order to achieve parity with other
> > architectures.
> >
> > Signed-off-by: Leonardo Bras <leobras at redhat.com>
> > ---
> >  arch/riscv/include/asm/cmpxchg.h | 35 ++++++++++++++++++++++++++++++++
> >  1 file changed, 35 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> > index 5a07646fae65..dfb433ac544f 100644
> > --- a/arch/riscv/include/asm/cmpxchg.h
> > +++ b/arch/riscv/include/asm/cmpxchg.h
> > @@ -72,6 +72,36 @@
> >   * indicated by comparing RETURN with OLD.
> >   */
> >
> > +#define __arch_cmpxchg_mask(sc_sfx, prepend, append, r, p, o, n)       \
> > +({                                                                     \
> > +       /* Depends on 2-byte variables being 2-byte aligned */          \
> > +       ulong __s = ((ulong)(p) & 0x3) * BITS_PER_BYTE;                 \
> > +       ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0)   \
> > +                       << __s;                                         \
> > +       ulong __newx = (ulong)(n) << __s;                               \
> > +       ulong __oldx = (ulong)(o) << __s;                               \
> > +       ulong __retx;                                                   \
> > +       register unsigned int __rc;                                     \
> > +                                                                       \
> > +       __asm__ __volatile__ (                                          \
> > +               prepend                                                 \
> > +               "0:     lr.w %0, %2\n"                                  \
> > +               "       and  %0, %0, %z5\n"                             \
> > +               "       bne  %0, %z3, 1f\n"                             \

> bug:
> -               "       and  %0, %0, %z5\n"                             \
> -               "       bne  %0, %z3, 1f\n"                             \
> +               "       and  %1, %0, %z5\n"                             \
> +               "       bne  %1, %z3, 1f\n"                             \
> Your code breaks the %0.

What do you mean by breaks here?

In the end of this macro, I intended  to have __retx = (*p & __mask)
which means the value is clean to be rotated at the end of the macro
(no need to apply the mask again): r = __ret >> __s;

Also, I assumed we are supposed to return the same variable type
as the pointer, so this is valid:
u8 a, *b, c;
a = xchg(b, c);

Is this correct?
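For reference, here is a plain-C sketch of the semantics I'm describing
above. This is an illustration only, not the kernel macro: it assumes a
little-endian layout (as on riscv), replaces the lr.w/sc.w retry loop
with a plain load and store, and the helper name cmpxchg_u8_model is
made up for the example. Names like s and mask mirror __s and __mask:

```c
#include <stdint.h>

#define BITS_PER_BYTE 8

/*
 * Illustrative model only (not the kernel code): a 1-byte cmpxchg
 * emulated on the aligned 32-bit word containing *p, little-endian.
 * The lr.w/sc.w retry loop is simplified to a plain load and store.
 */
static uint8_t cmpxchg_u8_model(uint8_t *p, uint8_t old, uint8_t newv)
{
	uintptr_t s = ((uintptr_t)p & 0x3) * BITS_PER_BYTE;	/* __s */
	uint32_t mask = 0xffu << s;				/* __mask */
	uint32_t *wp = (uint32_t *)((uintptr_t)p & ~(uintptr_t)0x3);

	uint32_t loaded = *wp;			/* lr.w %0, %2      */
	uint32_t retx = loaded & mask;		/* and  %0, %0, %z5 */

	if (retx == (uint32_t)old << s)		/* bne not taken    */
		*wp = (loaded & ~mask) | ((uint32_t)newv << s);	/* sc.w */

	return (uint8_t)(retx >> s);		/* r = __retx >> __s */
}
```

So the returned value is already masked when it leaves the asm, and the
final shift recovers the byte, which is the point I was making about not
needing a second "and".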

> > +               append                                                  \
> > +               "1:\n"                                                  \
> > +               : "=&r" (__retx), "=&r" (__rc), "+A" (*(p))             \
> > +               : "rJ" ((long)__oldx), "rJ" (__newx),                   \
> > +                 "rJ" (__mask), "rJ" (~__mask)                         \
> > +               : "memory");                                            \
> > +                                                                       \
> > +       r = (__typeof__(*(p)))(__retx >> __s);                          \
> > +})
> > +
> >
> >  #define __arch_cmpxchg(lr_sfx, sc_sfx, prepend, append, r, p, co, o, n)        \
> >  ({                                                                     \
> > @@ -98,6 +128,11 @@
> >         __typeof__(*(ptr)) __ret;                                       \
> >                                                                         \
> >         switch (sizeof(*__ptr)) {                                       \
> > +       case 1:                                                         \
> > +       case 2:                                                         \
> > +               __arch_cmpxchg_mask(sc_sfx, prepend, append,            \
> > +                                       __ret, __ptr, __old, __new);    \
> > +               break;                                                  \
> >         case 4:                                                         \
> >                 __arch_cmpxchg(".w", ".w" sc_sfx, prepend, append,      \
> >                                 __ret, __ptr, (long), __old, __new);    \
> > --
> > 2.41.0
> >
>
>
> --
> Best Regards
>  Guo Ren
>

More information about the linux-riscv mailing list