[RFC PATCH] crypto: riscv: scalar accelerated GHASH

Ard Biesheuvel <ardb at kernel.org>
Thu Apr 17 07:15:15 PDT 2025


On Thu, 17 Apr 2025 at 10:42, Qingfang Deng <dqfext at gmail.com> wrote:
>
> On Thu, Apr 17, 2025 at 3:58 PM Ard Biesheuvel <ardb at kernel.org> wrote:
> > > >
> > > > What is the use case for this? AIUI, the scalar AES instructions were
> > > > never implemented by anyone, so how do you expect this to be used in
> > > > practice?
> > >
> > > The use case _is_ AES-GCM, as you mentioned. Without this, computing
> > > GHASH can take a considerable amount of CPU time (as measured with perf).
> > >
> >
> > I see. But do you have a particular configuration in mind? Does it
> > have scalar AES too? I looked into that a while ago [0] but I was told
> > that nobody actually incorporates that. So what about these
> > extensions? Are they commonly implemented?
>
> It's aes-generic.c (LUT-based) with accelerated GHASH.
>
> >
> > [0] https://web.git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=riscv-scalar-aes
> >
> > > > ...
> > > > > +static __always_inline __uint128_t get_unaligned_be128(const u8 *p)
> > > > > +{
> > > > > +       __uint128_t val;
> > > > > +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> > > >
> > > > CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS means that get_unaligned_xxx()
> > > > helpers are cheap. Dereferencing a void* that was cast to a more
> > > > strictly aligned type is still UB as per the C standard.
> > >
> > > Technically an unaligned access is UB, but this pattern is widely
> > > used in networking code.
> > >
> >
> > Of course. But that is no reason to keep doing it.
> >
> > > >
> > > > So better to drop the #ifdef entirely, and just use the
> > > > get_unaligned_be64() helpers for both cases.
> > >
> > > Currently those helpers won't generate rev8 instructions, even if
> > > HAVE_EFFICIENT_UNALIGNED_ACCESS and RISCV_ISA_ZBB are set, so I have to
> > > implement my own version of this to reduce the number of instructions,
> > > and to align with the original OpenSSL implementation.
> > >
> >
> > So fix the helpers.
>
> The issue is that RISC-V GCC doesn’t emit efficient unaligned loads by default:
> - Not all RISC-V CPUs support unaligned access efficiently, so GCC
> falls back to conservative byte-wise code.

That makes sense.
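
For reference, the conservative byte-wise code is a direct result of
how the generic unaligned helpers are written. Roughly, from memory
(a sketch, not the exact kernel source), they read through a packed
struct and leave the lowering entirely to the compiler:

/* Sketch of the generic unaligned helpers: the packed struct tells
 * the compiler that the pointer may be misaligned, and the compiler
 * picks the load sequence: byte loads, unless the selected tuning
 * says unaligned accesses are cheap. u64/__be64 and be64_to_cpu()
 * are the usual kernel types and byteswap helpers.
 */
#define __get_unaligned_t(type, ptr) ({					\
	const struct { type x; } __attribute__((packed)) *__p = (ptr);	\
	__p->x;								\
})

static inline u64 get_unaligned_be64(const void *p)
{
	return be64_to_cpu(__get_unaligned_t(__be64, p));
}

So there is nothing wrong with the helpers themselves; it is the
tuning that makes the compiler play it safe.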

> - There's no clean way to force the optimized path: GCC only emits
> fast unaligned loads if tuned for a specific CPU (e.g., -mtune=size or
> -mtune=thead-c906), which the kernel doesn't typically do, even with
> HAVE_EFFICIENT_UNALIGNED_ACCESS.
>
> Maybe we should raise this with the GCC maintainers. An explicit
> option to enable optimized unaligned access could help.
>

HAVE_EFFICIENT_UNALIGNED_ACCESS is a build-time setting, so the
resulting kernel only runs correctly on hardware that supports
unaligned accesses natively.

So that means you could pass this -mtune= option too in that case, no?
Then you can just use a packed struct or an __aligned(1) annotation,
and the compiler will emit the correct code for you, depending on
whether unaligned accesses are permitted.
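
I.e., something along these lines (untested sketch, reusing the
get_unaligned_be128() name from the patch):

/* Two generic helper calls instead of the #ifdef: with an -mtune=
 * that permits fast unaligned accesses, each get_unaligned_be64()
 * should compile to ld + rev8 (given Zbb); otherwise the compiler
 * falls back to byte loads, which is the correct code for such
 * systems.
 */
static __always_inline __uint128_t get_unaligned_be128(const u8 *p)
{
	return ((__uint128_t)get_unaligned_be64(p) << 64) |
	       get_unaligned_be64(p + 8);
}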


