[PATCH v4 1/2] arm: lib: xor-neon: remove unnecessary GCC < 4.6 warning
Nick Desaulniers
ndesaulniers at google.com
Tue Jan 19 18:10:24 EST 2021
On Tue, Jan 19, 2021 at 2:04 PM Nick Desaulniers
<ndesaulniers at google.com> wrote:
>
> On Tue, Jan 19, 2021 at 1:35 PM Arnd Bergmann <arnd at kernel.org> wrote:
> >
> > On Tue, Jan 19, 2021 at 10:18 PM 'Nick Desaulniers' via Clang Built
> > Linux <clang-built-linux at googlegroups.com> wrote:
> > >
> > > On Tue, Jan 19, 2021 at 5:17 AM Adrian Ratiu <adrian.ratiu at collabora.com> wrote:
> > > > diff --git a/arch/arm/lib/xor-neon.c b/arch/arm/lib/xor-neon.c
> > > > index b99dd8e1c93f..f9f3601cc2d1 100644
> > > > --- a/arch/arm/lib/xor-neon.c
> > > > +++ b/arch/arm/lib/xor-neon.c
> > > > @@ -14,20 +14,22 @@ MODULE_LICENSE("GPL");
> > > > #error You should compile this file with '-march=armv7-a -mfloat-abi=softfp -mfpu=neon'
> > > > #endif
> > > >
> > > > +/*
> > > > + * TODO: Even though -ftree-vectorize is enabled by default in Clang, the
> > > > + * compiler does not produce vectorized code due to its cost model.
> > > > + * See: https://github.com/ClangBuiltLinux/linux/issues/503
> > > > + */
> > > > +#ifdef CONFIG_CC_IS_CLANG
> > > > +#warning Clang does not vectorize code in this file.
> > > > +#endif
> > >
> > > Arnd, remind me again why it's a bug that the compiler's cost model
> > > says it's faster to not produce a vectorized version of these loops?
> > > I stand by my previous comment: https://bugs.llvm.org/show_bug.cgi?id=40976#c8
> >
> > The point is that without vectorizing the code, there is no point in building
> > both the default xor code and a "neon" version that has to save/restore
> > the neon registers but doesn't actually use them.
> >
> > Arnd
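(Context for anyone skimming the thread: the "neon" xor variants are
reached through glue that brackets the inner loop with
kernel_neon_begin()/kernel_neon_end(). The sketch below is only
illustrative -- the wrapper and callee names are made up -- but it shows
the overhead Arnd is pointing at: if the inner loop never actually gets
vectorized, we still pay for the NEON state handling around it.)

/* Illustrative only, not the literal arch/arm glue. */
#include <asm/neon.h>	/* kernel_neon_begin()/kernel_neon_end() */

static void xor_2_via_neon(unsigned long bytes,
			   unsigned long *p1, unsigned long *p2)
{
	kernel_neon_begin();	/* make kernel-mode NEON use safe */
	/*
	 * Hypothetical inner loop built from asm-generic/xor.h; if the
	 * compiler emitted plain scalar code here, the begin/end pair
	 * around it bought us nothing.
	 */
	xor_8regs_2_inner(bytes, p1, p2);
	kernel_neon_end();
}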
>
> Thoughts? Also, Nathan brings up my previous point about restrict.
> This would benefit both GCC and Clang: neither would have to emit two
> "versions" of the loop (a vectorized one, taken when the distance
> between the two pointers is at least 8 elements, and a scalar fallback
> otherwise). But the callers would have to guarantee that the buffers
> never overlap, otherwise it's UB.
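Concretely, here's the kind of thing I have in mind for the restrict
idea (untested sketch of the xor_8regs_2 template from
asm-generic/xor.h, spelled __restrict since GCC and Clang accept that
form in any C dialect):

static void
xor_8regs_2(unsigned long bytes,
	    unsigned long * __restrict p1,
	    unsigned long * __restrict p2)
{
	long lines = bytes / (sizeof (long)) / 8;

	do {
		/*
		 * __restrict promises the compiler that p1 and p2 never
		 * alias, so both GCC and Clang can vectorize this loop
		 * unconditionally instead of emitting a vector body plus
		 * a scalar fallback guarded by a runtime overlap check.
		 */
		p1[0] ^= p2[0];
		p1[1] ^= p2[1];
		p1[2] ^= p2[2];
		p1[3] ^= p2[3];
		p1[4] ^= p2[4];
		p1[5] ^= p2[5];
		p1[6] ^= p2[6];
		p1[7] ^= p2[7];
		p1 += 8;
		p2 += 8;
	} while (--lines > 0);
}

The trade-off, as above: the callers have to guarantee the buffers
really don't overlap, or this becomes undefined behaviour.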
>
> diff --git a/include/asm-generic/xor.h b/include/asm-generic/xor.h
> index b62a2a56a4d4..7db16adc7d89 100644
> --- a/include/asm-generic/xor.h
> +++ b/include/asm-generic/xor.h
> @@ -7,12 +7,21 @@
>
> #include <linux/prefetch.h>
>
> +/* Overrule LLVM's cost model in order to always produce a vectorized loop
> + * version.
> + */
> +#if defined(__clang__) && defined(CONFIG_ARM)
> +#define FORCE_VECTORIZE _Pragma("clang loop vectorize(enable)")
> +#else
> +#define CLANG_FORCE_VECTORIZE
^ err, I had renamed it, but missed this. Should have been
`FORCE_VECTORIZE` but you catch the drift.
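For the record, with the rename applied consistently the block would
read:

/* Overrule LLVM's cost model so that a vectorized loop version is
 * always produced.
 */
#if defined(__clang__) && defined(CONFIG_ARM)
#define FORCE_VECTORIZE _Pragma("clang loop vectorize(enable)")
#else
#define FORCE_VECTORIZE
#endif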
> +#endif
> +
> static void
> xor_8regs_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
> {
> long lines = bytes / (sizeof (long)) / 8;
>
> - do {
> + FORCE_VECTORIZE do {
> p1[0] ^= p2[0];
> p1[1] ^= p2[1];
> p1[2] ^= p2[2];
> @@ -32,7 +41,7 @@ xor_8regs_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
> {
> long lines = bytes / (sizeof (long)) / 8;
>
> - do {
> + FORCE_VECTORIZE do {
> p1[0] ^= p2[0] ^ p3[0];
> p1[1] ^= p2[1] ^ p3[1];
> p1[2] ^= p2[2] ^ p3[2];
> @@ -53,7 +62,7 @@ xor_8regs_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
> {
> long lines = bytes / (sizeof (long)) / 8;
>
> - do {
> + FORCE_VECTORIZE do {
> p1[0] ^= p2[0] ^ p3[0] ^ p4[0];
> p1[1] ^= p2[1] ^ p3[1] ^ p4[1];
> p1[2] ^= p2[2] ^ p3[2] ^ p4[2];
> @@ -75,7 +84,7 @@ xor_8regs_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
> {
> long lines = bytes / (sizeof (long)) / 8;
>
> - do {
> + FORCE_VECTORIZE do {
> p1[0] ^= p2[0] ^ p3[0] ^ p4[0] ^ p5[0];
> p1[1] ^= p2[1] ^ p3[1] ^ p4[1] ^ p5[1];
> p1[2] ^= p2[2] ^ p3[2] ^ p4[2] ^ p5[2];
> --
> Thanks,
> ~Nick Desaulniers
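Whichever way we go, the result is easy to check with Clang's
optimization remarks: building the file with -Rpass=loop-vectorize
reports each loop that was vectorized (and the vector width chosen),
while -Rpass-missed=loop-vectorize and -Rpass-analysis=loop-vectorize
explain why a loop was skipped. That makes it straightforward to
confirm whether the pragma or the restrict change actually had an
effect.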
--
Thanks,
~Nick Desaulniers