[PATCH 2/2] bitops: rotate: Add riscv implementation using Zbb extension
cp0613 at linux.alibaba.com
Sat Jun 28 05:08:16 PDT 2025
On Wed, 25 Jun 2025 17:02:34 +0100, david.laight.linux at gmail.com wrote:
> Is it even a gain in the zbb case?
> The "rorw" is only ever going to help full word rotates.
> Here you might as well do ((word << 8 | word) >> shift).
>
> For "rol8" you'd need ((word << 24 | word) 'rol' shift).
> I still bet the generic code is faster (but see below).
>
> Same for 16bit rotates.
>
> Actually the generic version is (probably) horrid for everything except x86.
> See https://www.godbolt.org/z/xTxYj57To
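For reference, the widening trick described above can be written out in plain
C roughly as follows. This is only an illustrative sketch with made-up names,
assuming the kernel's u8/u32 types; it is not code from the patch:
```
/* Replicate the byte so that one plain shift (or one full-word rotate)
 * produces the 8-bit rotate directly. */
static inline u8 sketch_ror8(u8 word, unsigned int shift)
{
	/* bits [15:8] and [7:0] both hold the byte, so a right shift by
	 * 0..7 leaves the rotated byte in bits [7:0] */
	return (u8)((((u32)word << 8) | word) >> (shift & 7));
}

static inline u8 sketch_rol8(u8 word, unsigned int shift)
{
	unsigned int s = shift & 7;
	/* copy the byte into bits [31:24]; a 32-bit rotate-left (a single
	 * rolw/rorw with Zbb) wraps the high bits back into bits [7:0] */
	u32 v = ((u32)word << 24) | word;

	return (u8)((v << s) | (v >> ((32 - s) & 31)));
}
```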
Thanks for the suggestion, the Godbolt link is very instructive. Judging by
the results, the generic version is indeed the friendliest to x86, which is
one more reason the other architectures deserve their own optimized versions.
Taking the riscv64 ror32 implementation as an example, compare the number of
assembly instructions generated for the following two functions:
```
u32 zbb_opt_ror32(u32 word, unsigned int shift)
{
asm volatile(
".option push\n"
".option arch,+zbb\n"
"rorw %0, %1, %2\n"
".option pop\n"
: "=r" (word) : "r" (word), "r" (shift) :);
return word;
}

u16 generic_ror32(u16 word, unsigned int shift)
{
return (word >> (shift & 31)) | (word << ((-shift) & 31));
}
```
Their disassembly is:
```
zbb_opt_ror32:
<+0>: addi sp,sp,-16
<+2>: sd s0,0(sp)
<+4>: sd ra,8(sp)
<+6>: addi s0,sp,16
<+8>: .insn 4, 0x60b5553b
<+12>: ld ra,8(sp)
<+14>: ld s0,0(sp)
<+16>: sext.w a0,a0
<+18>: addi sp,sp,16
<+20>: ret
generic_ror32:
<+0>: addi sp,sp,-16
<+2>: andi a1,a1,31
<+4>: sd s0,0(sp)
<+6>: sd ra,8(sp)
<+8>: addi s0,sp,16
<+10>: negw a5,a1
<+14>: sllw a5,a0,a5
<+18>: ld ra,8(sp)
<+20>: ld s0,0(sp)
<+22>: srlw a0,a0,a1
<+26>: or a0,a0,a5
<+28>: slli a0,a0,0x30
<+30>: srli a0,a0,0x30
<+32>: addi sp,sp,16
<+34>: ret
```
As the listings show, the Zbb-based implementation needs fewer instructions
(the ".insn 4, 0x60b5553b" line is just the raw encoding of rorw a0,a0,a1,
which the disassembler used here does not decode), and the same holds for the
16-bit and 8-bit variants.
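To illustrate the 16-bit case, here is one way the inline-asm pattern above
could be combined with the halfword-replication idea. This is only a sketch
put together for comparison, not necessarily what the patch itself does:
```
u16 zbb_sketch_ror16(u16 word, unsigned int shift)
{
	/* replicate the halfword so a single rorw yields the correct low
	 * 16 bits for any shift (the 32-bit pattern repeats every 16 bits) */
	u32 tmp = ((u32)word << 16) | word;

	asm volatile(
		".option push\n"
		".option arch,+zbb\n"
		"rorw %0, %1, %2\n"
		".option pop\n"
		: "=r" (tmp) : "r" (tmp), "r" (shift) :);

	/* truncation discards the rotated upper copy */
	return (u16)tmp;
}
```
The replication costs one extra shift and one or, so whether this really beats
the generic sequence would have to be checked in the generated code, the same
way as above.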