[PATCH v11 4/5] riscv: Add checksum library

Wang, Xiao W xiao.w.wang at intel.com
Fri Nov 24 22:42:35 PST 2023



> -----Original Message-----
> From: Charlie Jenkins <charlie at rivosinc.com>
> Sent: Saturday, November 18, 2023 5:28 AM
> To: Charlie Jenkins <charlie at rivosinc.com>; Palmer Dabbelt
> <palmer at dabbelt.com>; Conor Dooley <conor at kernel.org>; Samuel Holland
> <samuel.holland at sifive.com>; David Laight <David.Laight at aculab.com>;
> Wang, Xiao W <xiao.w.wang at intel.com>; Evan Green <evan at rivosinc.com>;
> linux-riscv at lists.infradead.org; linux-kernel at vger.kernel.org;
> linux-arch at vger.kernel.org
> Cc: Paul Walmsley <paul.walmsley at sifive.com>; Albert Ou
> <aou at eecs.berkeley.edu>; Arnd Bergmann <arnd at arndb.de>; Conor Dooley
> <conor.dooley at microchip.com>
> Subject: [PATCH v11 4/5] riscv: Add checksum library
> 
> Provide 32-bit and 64-bit versions of do_csum. When compiled for
> 32-bit, it will load from the buffer in groups of 32 bits, and when
> compiled for 64-bit, it will load in groups of 64 bits.
> 
> Additionally, provide a RISC-V optimized implementation of csum_ipv6_magic.
> 
> Signed-off-by: Charlie Jenkins <charlie at rivosinc.com>
> Acked-by: Conor Dooley <conor.dooley at microchip.com>
> ---
>  arch/riscv/include/asm/checksum.h |  13 +-
>  arch/riscv/lib/Makefile           |   1 +
>  arch/riscv/lib/csum.c             | 326 ++++++++++++++++++++++++++++++++++++++
>  3 files changed, 339 insertions(+), 1 deletion(-)
> 
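The word-at-a-time scheme described in the commit message is the
classic trick: accumulate whole machine words with wrapping adds and
re-add each carry bit as it is lost. As a minimal sketch of the idea
(hypothetical illustration, not the patch's do_csum_common, whose body
is elided below), assuming a word-aligned buffer:

	static unsigned long csum_words(const unsigned long *ptr,
					const unsigned long *end,
					unsigned long csum)
	{
		while (ptr < end) {
			unsigned long data = *ptr++;

			csum += data;
			/* Wrapping add: csum < data iff the add carried. */
			csum += csum < data;
		}
		return csum;
	}

The word-wide sum still has to be folded down to 16 bits afterwards,
which is what the rori/add sequences in the hunks below take care of.
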
> diff --git a/arch/riscv/include/asm/checksum.h b/arch/riscv/include/asm/checksum.h
> index 2fcf864186e7..3fa04ff1eda8 100644
> --- a/arch/riscv/include/asm/checksum.h
> +++ b/arch/riscv/include/asm/checksum.h
> @@ -12,6 +12,17 @@
> 
[...]

> + * misaligned accesses, or when buff is known to be aligned.
> + */
> +static inline __no_sanitize_address unsigned int
> +do_csum_no_alignment(const unsigned char *buff, int len)
> +{
> +	unsigned long csum, data;
> +	const unsigned long *ptr, *end;
> +
> +	ptr = (const unsigned long *)(buff);
> +	data = *(ptr++);
> +
> +	kasan_check_read(buff, len);
> +
> +	end = (const unsigned long *)(buff + len);
> +	csum = do_csum_common(ptr, end, data);
> +
> +	/*
> +	 * Zbb support saves only 6 instructions, so a runtime check is not
> +	 * worthwhile without the alternatives framework.
> +	 */
> +	if (IS_ENABLED(CONFIG_RISCV_ISA_ZBB) &&
> +	    IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
> +		unsigned long fold_temp;
> +
> +		/*
> +		 * Zbb is likely available when the kernel is compiled with Zbb
> +		 * support, so nop when Zbb is available and jump when Zbb is
> +		 * not available.
> +		 */
> +		asm_volatile_goto(ALTERNATIVE("j %l[no_zbb]", "nop", 0,
> +					      RISCV_ISA_EXT_ZBB, 1)
> +				  :
> +				  :
> +				  :
> +				  : no_zbb);
> +
> +#ifdef CONFIG_32BIT
> +		asm (".option push				\n\
> +		.option arch,+zbb				\n\
> +			rori	%[fold_temp], %[csum], 16	\n\
> +			add	%[csum], %[fold_temp], %[csum]	\n\
> +		.option pop"
> +			: [csum] "+r" (csum), [fold_temp] "=&r" (fold_temp)
> +			:
> +			: );
> +
> +#else /* !CONFIG_32BIT */
> +		asm (".option push				\n\
> +		.option arch,+zbb				\n\
> +			rori	%[fold_temp], %[csum], 32	\n\
> +			add	%[csum], %[fold_temp], %[csum]	\n\
> +			srli	%[csum], %[csum], 32		\n\
> +			roriw	%[fold_temp], %[csum], 16	\n\
> +			addw	%[csum], %[fold_temp], %[csum]	\n\
> +		.option pop"
> +			: [csum] "+r" (csum), [fold_temp] "=&r" (fold_temp)
> +			:
> +			: );
> +#endif /* !CONFIG_32BIT */
> +		return csum >> 16;
> +	}
> +no_zbb:
> +#ifndef CONFIG_32BIT
> +	csum += ror64(csum, 32);
> +	csum >>= 32;
> +#endif
> +	csum = (u32)csum + ror32((u32)csum, 16);
> +	return csum >> 16;
> +}
> +
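
The fold at the end of do_csum_no_alignment() is the standard
end-around-carry reduction: adding the value rotated by half its width
sums the two halves, and a carry out of the low half lands in the high
half, so the final "csum >> 16" already includes the end-around carry.
A portable sketch of the same reduction for a 32-bit accumulator (my
illustration using the kernel's ror32() helper, not code from the
patch):

	/* Fold a 32-bit accumulator to a 16-bit ones' complement sum. */
	static inline unsigned int fold16(u32 csum)
	{
		csum += ror32(csum, 16);	/* low half + high half */
		return csum >> 16;		/* carry already folded in */
	}

For example, csum = 0xffff0001 gives 0xffff0001 + 0x0001ffff =
0x00010000 (mod 2^32), and 0x00010000 >> 16 = 0x0001, matching the
byte-wise RFC 1071 fold of the same value.
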
> +/*
> + * Perform a checksum on an arbitrary memory address.
> + * Will do a light-weight address alignment if buff is misaligned, unless
> + * the CPU supports fast misaligned accesses.
> + */
> +unsigned int do_csum(const unsigned char *buff, int len)
> +{
> +	if (unlikely(len <= 0))
> +		return 0;
> +
> +	/*
> +	 * Significant performance gains can be seen by not doing alignment
> +	 * on machines with fast misaligned accesses.
> +	 *
> +	 * There is some duplicate code between the "with_alignment" and
> +	 * "no_alignment" implementations, but the overlap is too awkward to
> +	 * be able to fit in one function without introducing multiple static
> +	 * branches. The largest chunk of overlap was delegated into the
> +	 * do_csum_common function.
> +	 */
> +	if (static_branch_likely(&fast_misaligned_access_speed_key))
> +		return do_csum_no_alignment(buff, len);
> +
> +	if (((unsigned long)buff & OFFSET_MASK) == 0)
> +		return do_csum_no_alignment(buff, len);
> +
> +	return do_csum_with_alignment(buff, len);
> +}
> 
> --
> 2.34.1
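
The dispatch in do_csum() looks right: machines with fast misaligned
accesses and buffers that are already aligned (OFFSET_MASK is
presumably the word-size mask defined earlier in the file) take the
no-alignment path, and only the remaining cases pay for the
light-weight alignment. For anyone who wants to spot-check the
arithmetic outside the kernel, a byte-wise RFC 1071 reference to diff
the optimized routine against (a hypothetical user-space harness,
assuming a little-endian host like RISC-V):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/* Sum native-endian 16-bit words with end-around carry. */
	static uint16_t csum_ref(const void *buf, int len)
	{
		const uint8_t *p = buf;
		uint32_t sum = 0;

		while (len > 1) {
			uint16_t w;

			memcpy(&w, p, 2);
			sum += w;
			p += 2;
			len -= 2;
		}
		if (len)		/* trailing byte on a little-endian host */
			sum += *p;
		while (sum >> 16)	/* fold carries back in */
			sum = (sum & 0xffff) + (sum >> 16);
		return sum;
	}

	int main(void)
	{
		uint8_t buf[63];

		for (int i = 0; i < 63; i++)
			buf[i] = i * 7;
		printf("reference csum: 0x%04x\n", csum_ref(buf, 63));
		return 0;
	}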

Reviewed-by: Xiao Wang <xiao.w.wang at intel.com>

BRs,
Xiao
