[PATCH 1/5] lib/crc: arm64: Drop unnecessary chunking logic from crc64

Ard Biesheuvel ardb at kernel.org
Tue Mar 31 23:57:23 PDT 2026



On Wed, 1 Apr 2026, at 00:33, Eric Biggers wrote:
> On Mon, Mar 30, 2026 at 04:46:32PM +0200, Ard Biesheuvel wrote:
>> On arm64, kernel mode NEON executes with preemption enabled, so there is
>> no need to chunk the input by hand.
>> 
>> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
>
> There's still similar "chunking" in other arm64 code:
>
>     $ git grep -E 'SZ_4K|cond_yield' lib/crypto/arm64
>     lib/crypto/arm64/chacha.h:              unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
>     lib/crypto/arm64/poly1305.h:                    unsigned int todo = min_t(unsigned int, len, SZ_4K);
>     lib/crypto/arm64/sha1-ce-core.S:        cond_yield      1f, x5, x6
>     lib/crypto/arm64/sha256-ce.S:   cond_yield      1f, x5, x6
>     lib/crypto/arm64/sha3-ce-core.S:        cond_yield 4f, x8, x9
>     lib/crypto/arm64/sha512-ce-core.S:      cond_yield      3f, x4, x5
>
> I thought it was still sticking around, despite kernel-mode NEON now
> being preemptible on arm64, because of CONFIG_PREEMPT_VOLUNTARY.
>
> However, I see that support for CONFIG_PREEMPT_VOLUNTARY was recently
> removed on arm64.  So that's what finally makes this no longer needed,
> and we can now clean up these other cases too, right?
>

Indeed.


