[PATCH v5 1/4] asm-generic: Improve csum_fold
David Laight
David.Laight at ACULAB.COM
Fri Sep 15 00:29:23 PDT 2023
From: Charlie Jenkins
> Sent: 15 September 2023 04:50
>
> This csum_fold implementation introduced into arch/arc by Vineet Gupta
> is better than the default implementation on at least arc, x86, arm, and
> riscv. Using GCC trunk and compiling a non-inlined version, this
> implementation has 41.6667%, 25%, and 16.6667% fewer instructions on
> riscv64, x86-64, and arm64 respectively with -O3 optimization.
Nit-picking the commit message...
Some of those architectures have their own asm implementations.
The arm one is better than the C code below; the x86 ones aren't.
I think that only sparc32 (carry flag but no rotate) and
arm/arm64 (barrel shifter on every instruction) have versions
that are better than the one here.
Since I suggested it to Charlie:
Reviewed-by: David Laight <david.laight at aculab.com>
>
> Signed-off-by: Charlie Jenkins <charlie at rivosinc.com>
> ---
> include/asm-generic/checksum.h | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/include/asm-generic/checksum.h b/include/asm-generic/checksum.h
> index 43e18db89c14..adab9ac4312c 100644
> --- a/include/asm-generic/checksum.h
> +++ b/include/asm-generic/checksum.h
> @@ -30,10 +30,7 @@ extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
> */
> static inline __sum16 csum_fold(__wsum csum)
> {
> - u32 sum = (__force u32)csum;
You'll need to reinstate that line (and use 'sum' below) to stop
sparse complaining about arithmetic on the bitwise __wsum type.
> - sum = (sum & 0xffff) + (sum >> 16);
> - sum = (sum & 0xffff) + (sum >> 16);
> - return (__force __sum16)~sum;
> + return (__force __sum16)((~csum - ror32(csum, 16)) >> 16);
> }
> #endif
>
>
> --
> 2.42.0