[PATCH V2 3/3] riscv: xchg: Prefetch the destination word for sc.w
Guo Ren
guoren at kernel.org
Wed Jan 3 17:24:40 PST 2024
On Thu, Jan 4, 2024 at 3:45 AM Leonardo Bras <leobras at redhat.com> wrote:
>
> On Wed, Jan 03, 2024 at 02:15:45PM +0800, Guo Ren wrote:
> > On Tue, Jan 2, 2024 at 7:19 PM Andrew Jones <ajones at ventanamicro.com> wrote:
> > >
> > > On Sun, Dec 31, 2023 at 03:29:53AM -0500, guoren at kernel.org wrote:
> > > > From: Guo Ren <guoren at linux.alibaba.com>
> > > >
> > > > The cost of changing a cacheline from shared to exclusive state can be
> > > > significant, especially when this is triggered by an exclusive store,
> > > > since it may result in having to retry the transaction.
> > > >
> > > > This patch makes use of prefetch.w to prefetch cachelines for write
> > > > prior to lr/sc loops when using the xchg_small atomic routine.
> > > >
> > > > This patch is inspired by commit: 0ea366f5e1b6 ("arm64: atomics:
> > > > prefetch the destination word for write prior to stxr").
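A standalone sketch of the idea, assuming a toolchain that accepts the
Zicbop "prefetch.w" mnemonic (PREFETCHW_ASM() itself comes from an
earlier patch in this series and is not shown in this hunk):

/*
 * Hypothetical sketch, not the kernel's macro: hint the core to pull
 * the line into exclusive state before the lr.w/sc.w retry loop, so
 * the sc.w is less likely to fail and force another round trip.
 */
static inline unsigned int xchg32_sketch(unsigned int *p,
					 unsigned int newval)
{
	unsigned int old, rc;

	__asm__ __volatile__ (
		"	prefetch.w 0(%3)\n"	/* write-prefetch *p */
		"0:	lr.w	%0, (%3)\n"	/* load-reserve old value */
		"	sc.w	%1, %2, (%3)\n"	/* try to store newval */
		"	bnez	%1, 0b\n"	/* reservation lost: retry */
		: "=&r" (old), "=&r" (rc)
		: "r" (newval), "r" (p)
		: "memory");

	return old;
}

A production helper would also have to degrade to a nop on cores
without Zicbop, presumably via the kernel's ALTERNATIVE() patching;
the sketch issues the hint unconditionally.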
> > > >
> > > > Signed-off-by: Guo Ren <guoren at linux.alibaba.com>
> > > > Signed-off-by: Guo Ren <guoren at kernel.org>
> > > > ---
> > > > arch/riscv/include/asm/cmpxchg.h | 4 +++-
> > > > 1 file changed, 3 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> > > > index 26cea2395aae..d7b9d7951f08 100644
> > > > --- a/arch/riscv/include/asm/cmpxchg.h
> > > > +++ b/arch/riscv/include/asm/cmpxchg.h
> > > > @@ -10,6 +10,7 @@
> > > >
> > > > #include <asm/barrier.h>
> > > > #include <asm/fence.h>
> > > > +#include <asm/processor.h>
> > > >
> > > > #define __arch_xchg_masked(prepend, append, r, p, n) \
> > >
> > > Are you sure this is based on v6.7-rc7? Because I don't see this macro.
> > Oh, it is based on Leobras' patches. I will remove it in the next version.
>
> I would say this next :)
Thx for the grammar correction.
>
> >
> > >
> > > > ({ \
> > > > @@ -23,6 +24,7 @@
> > > > \
> > > > __asm__ __volatile__ ( \
> > > > prepend \
> > > > + PREFETCHW_ASM(%5) \
> > > > "0: lr.w %0, %2\n" \
> > > > " and %1, %0, %z4\n" \
> > > > " or %1, %1, %z3\n" \
> > > > @@ -30,7 +32,7 @@
> > > > " bnez %1, 0b\n" \
> > > > append \
> > > > : "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \
> > > > - : "rJ" (__newx), "rJ" (~__mask) \
> > > > + : "rJ" (__newx), "rJ" (~__mask), "rJ" (__ptr32b) \
> > >
> > > I'm pretty sure we don't want to allow the J constraint for __ptr32b.
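To unpack the nit: on RISC-V the "J" machine constraint matches the
integer constant zero, so "rJ" lets the compiler substitute a literal
0 for the operand. That is fine for value operands printed with the
%z modifier (as %z3 and %z4 are in this loop), but a prefetch needs
its base address in a real register, which a plain "r" constraint
guarantees. A rough illustration, under the same Zicbop-mnemonic
assumption as the sketch above:

/*
 * Rough illustration, not from the patch: with "rJ", a constant
 * null pointer could reach the template as a literal 0 and break
 * "prefetch.w 0(base)", which needs a register for the base.
 */
static inline void prefetchw_base(unsigned int *p)
{
	__asm__ __volatile__ ("prefetch.w 0(%0)"
			      : : "r" (p));	/* "r", not "rJ" */
}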
> > >
> > > > : "memory"); \
> > > > \
> > > > r = (__typeof__(*(p)))((__retx & __mask) >> __s); \
> > > > --
> > > > 2.40.1
> > > >
> > >
> > > Thanks,
> > > drew
> >
> >
> >
> > --
> > Best Regards
> > Guo Ren
> >
>
> Nice patch :)
> Any reason it's not needed in __arch_cmpxchg_masked() and __arch_cmpxchg()?
CAS is a conditional AMO, unlike xchg (a standard AMO). Either arm64 is
wrong, or they have a problem with their hardware.
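Concretely: an lr/sc-based cmpxchg bails out before the sc.w whenever
the comparison fails, so a write prefetch could pull the line into
exclusive state, stealing it from other cores, without any store to
justify it. A hypothetical sketch of that early-exit shape (same
"prefetch.w"-capable toolchain assumption as above; the prefetch is
deliberately absent):

/*
 * Hypothetical sketch, not the kernel's macro: when *p does not
 * match "expected", the bne skips the sc.w entirely, so an eager
 * write prefetch would have cost other cores the line for nothing.
 */
static inline unsigned int cmpxchg32_sketch(unsigned int *p,
					    unsigned int expected,
					    unsigned int newval)
{
	unsigned int old, rc;

	__asm__ __volatile__ (
		"0:	lr.w	%0, (%4)\n"	/* load-reserve current */
		"	bne	%0, %2, 1f\n"	/* mismatch: no store */
		"	sc.w	%1, %3, (%4)\n"	/* store only on match */
		"	bnez	%1, 0b\n"	/* reservation lost: retry */
		"1:\n"
		: "=&r" (old), "=&r" (rc)
		: "r" (expected), "r" (newval), "r" (p)
		: "memory");

	return old;
}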
>
> Thanks!
> Leo
>
--
Best Regards
Guo Ren