[PATCH v8 1/5] asm-generic: Improve csum_fold
Al Viro
viro at zeniv.linux.org.uk
Fri Oct 27 16:10:36 PDT 2023
On Fri, Oct 27, 2023 at 03:43:51PM -0700, Charlie Jenkins wrote:
> /*
> * computes the checksum of a memory block at buff, length len,
> * and adds in "sum" (32-bit)
> @@ -31,9 +33,7 @@ extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
> static inline __sum16 csum_fold(__wsum csum)
> {
> u32 sum = (__force u32)csum;
> - sum = (sum & 0xffff) + (sum >> 16);
> - sum = (sum & 0xffff) + (sum >> 16);
> - return (__force __sum16)~sum;
> + return (__force __sum16)((~sum - ror32(sum, 16)) >> 16);
> }
Will (~(sum + ror32(sum, 16)) >> 16) produce worse code than that?
Because at least with recent gcc this will generate the exact thing
you get from arm inline asm...