[PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage

Guo Ren guoren at kernel.org
Sat Apr 16 23:45:49 PDT 2022


On Sun, Apr 17, 2022 at 2:31 PM Boqun Feng <boqun.feng at gmail.com> wrote:
>
> On Sun, Apr 17, 2022 at 12:51:38PM +0800, Guo Ren wrote:
> > Hi Boqun & Andrea,
> >
> > On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng at gmail.com> wrote:
> > >
> > > On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> > > [...]
> > > >
> > > > If both the aq and rl bits are set, the atomic memory operation is
> > > > sequentially consistent and cannot be observed to happen before any
> > > > earlier memory operations or after any later memory operations in the
> > > > same RISC-V hart and to the same address domain.
> > > >                 "0:     lr.w     %[p],  %[c]\n"
> > > >                 "       sub      %[rc], %[p], %[o]\n"
> > > >                 "       bltz     %[rc], 1f\n"
> > > > -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > > +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > >                 "       bnez     %[rc], 0b\n"
> > > > -               "       fence    rw, rw\n"
> > > >                 "1:\n"
> > > > So .rl + fence rw, rw is over-constrained; using only sc.w.aqrl is more appropriate.
> > > >
> > >
> > > Can .aqrl order memory accesses before and after it (not against itself,
> > > against each other), i.e. act as a full memory barrier? For example, can
> > From the RVWMO spec description, the .aqrl annotation has the same
> > effect as appending a "fence rw, rw" to the AMO instruction, so it's RCsc.
> >
>
> Thanks for the confirmation, btw, where can I find the RVWMO spec?
RVWMO section:
https://five-embeddev.com/riscv-isa-manual/latest/rvwmo.html#ch:memorymodel

ATOMIC instructions:
https://five-embeddev.com/riscv-isa-manual/latest/a.html#atomics
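
And for reference, with the sc.w.aqrl change quoted above applied, the
whole loop would read roughly as below. This is only a sketch: it keeps
the operand names and constraints of the current
arch_atomic_sub_if_positive() in arch/riscv/include/asm/atomic.h, so
treat it as an illustration rather than the final patch:

static __always_inline int arch_atomic_sub_if_positive(atomic_t *v, int offset)
{
	int prev, rc;

	__asm__ __volatile__ (
		"0:	lr.w      %[p],  %[c]\n"
		"	sub       %[rc], %[p], %[o]\n"
		"	bltz      %[rc], 1f\n"
		/* acquire+release (RCsc) sc replaces the trailing fence rw, rw */
		"	sc.w.aqrl %[rc], %[rc], %[c]\n"
		"	bnez      %[rc], 0b\n"
		"1:\n"
		: [p] "=&r" (prev), [rc] "=&r" (rc), [c] "+A" (v->counter)
		: [o] "r" (offset)
		: "memory");

	return prev - offset;
}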

>
> > Not only .aqrl; I think the sequence below could also be RCsc when the
> > sc.w.aq is executed:
> > A: Pre-Access
> > B: lr.w.rl ADDR-0
> > ...
> > C: sc.w.aq ADDR-0
> > D: Post-Access
> > Because the sc.w.aq has an overlapping address and a data dependency on
> > the lr.w.rl, the global memory order should be A->B->C->D when the
> > sc.w.aq is executed. For the amoswap
> >
> > The purpose of the whole patchset is to reduce the usage of standalone
> > fence rw, rw instructions and maximize the usage of the RISC-V
> > .aq/.rl/.aqrl annotations.
> >
> >                 __asm__ __volatile__ (                                  \
> >                         "0:     lr.w %0, %2\n"                          \
> >                         "       bne  %0, %z3, 1f\n"                     \
> >                         "       sc.w.rl %1, %z4, %2\n"                  \
> >                         "       bnez %1, 0b\n"                          \
> >                         "       fence rw, rw\n"                         \
> >                         "1:\n"                                          \
> >
> > > we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> > > following litmus test?
> > >
> > >     C lr-sc-aqrl-pair-vs-full-barrier
> > >
> > >     {}
> > >
> > >     P0(int *x, int *y, atomic_t *u)
> > >     {
> > >             int r0;
> > >             int r1;
> > >
> > >             WRITE_ONCE(*x, 1);
> > >             r0 = atomic_cmpxchg(u, 0, 1);
> > >             r1 = READ_ONCE(*y);
> > >     }
> > >
> > >     P1(int *x, int *y, atomic_t *v)
> > >     {
> > >             int r0;
> > >             int r1;
> > >
> > >             WRITE_ONCE(*y, 1);
> > >             r0 = atomic_cmpxchg(v, 0, 1);
> > >             r1 = READ_ONCE(*x);
> > >     }
> > >
> > >     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
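
(The litmus test above can be checked against the kernel memory model
with herd7 from tools/memory-model in the kernel tree, e.g. saved as
lr-sc-aqrl-pair-vs-full-barrier.litmus and run with
"herd7 -conf linux-kernel.cfg lr-sc-aqrl-pair-vs-full-barrier.litmus".
LKMM forbids that exists clause, since both cmpxchg()s succeed and thus
act as full barriers, so the RISC-V mapping must forbid it too.)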
> > I think my patchset won't affect the ordering guarantee above. The
> > current RISC-V implementation only gives RCsc ordering when the
> > comparison succeeds (the loaded value matches the expected one) at
> > least once. So I would prefer the RISC-V cmpxchg to be:
> >
> >
> > -                       "0:     lr.w %0, %2\n"                          \
> > +                       "0:     lr.w.rl %0, %2\n"                       \
> >                         "       bne  %0, %z3, 1f\n"                     \
> >                         "       sc.w.rl %1, %z4, %2\n"                  \
> >                         "       bnez %1, 0b\n"                          \
> > -                       "       fence rw, rw\n"                         \
> >                         "1:\n"                                          \
> > +                       "       fence w, rw\n"                          \
> >
> > To give an unconditional RCsc ordering for atomic_cmpxchg.
> >
>
> Note that the Linux kernel doesn't require cmpxchg() to provide any
> ordering if it fails to update the memory location. So you won't need
> to strengthen atomic_cmpxchg().
Thx for the clarification.
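
So, with the failure path left relaxed, a minimal sketch of the
fully-ordered 32-bit case (keeping the operand constraints of the
existing __cmpxchg macro in arch/riscv/include/asm/cmpxchg.h, and
assuming sc.w.aqrl is accepted as a replacement for the trailing
fence rw, rw) would be:

		__asm__ __volatile__ (				\
			"0:	lr.w %0, %2\n"			\
			"	bne  %0, %z3, 1f\n"		\
			"	sc.w.aqrl %1, %z4, %2\n"	\
			"	bnez %1, 0b\n"			\
			"1:\n"					\
			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
			: "rJ" ((long)__old), "rJ" (__new)	\
			: "memory");				\

i.e. only the success path is ordered, which is all that cmpxchg()
requires.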

>
> Regards,
> Boqun
>
> > >
> > > Regards,
> > > Boqun
> >
> >
> >
> > --
> > Best Regards
> >  Guo Ren
> >
> > ML: https://lore.kernel.org/linux-csky/



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


