[PATCH 1/5] lib/crc: arm64: Drop unnecessary chunking logic from crc64

Eric Biggers ebiggers at kernel.org
Tue Mar 31 15:33:00 PDT 2026


On Mon, Mar 30, 2026 at 04:46:32PM +0200, Ard Biesheuvel wrote:
> On arm64, kernel mode NEON executes with preemption enabled, so there is
> no need to chunk the input by hand.
> 
> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>

There's still similar "chunking" in other arm64 code:

    $ git grep -E 'SZ_4K|cond_yield' lib/crypto/arm64
    lib/crypto/arm64/chacha.h:              unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
    lib/crypto/arm64/poly1305.h:                    unsigned int todo = min_t(unsigned int, len, SZ_4K);
    lib/crypto/arm64/sha1-ce-core.S:        cond_yield      1f, x5, x6
    lib/crypto/arm64/sha256-ce.S:   cond_yield      1f, x5, x6
    lib/crypto/arm64/sha3-ce-core.S:        cond_yield 4f, x8, x9
    lib/crypto/arm64/sha512-ce-core.S:      cond_yield      3f, x4, x5
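For reference, the chunking pattern in question looks roughly like the
sketch below (a minimal userspace approximation, not the kernel code
itself: SZ_4K, min_t(), and kernel_neon_begin()/kernel_neon_end() are
stubbed, and toy_update() stands in for the real NEON transform).  The
point of the pattern was to bound the length of each non-preemptible
NEON section to 4K of input:

```c
#include <stddef.h>

/* Stand-ins for kernel definitions, for illustration only. */
#define SZ_4K 4096
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

/* In the kernel these bracket a non-preemptible NEON region
 * (on arm64 they no longer disable preemption); stubbed here. */
static void kernel_neon_begin(void) { }
static void kernel_neon_end(void) { }

/* Toy stand-in for the NEON-accelerated update step (e.g. a CRC fold). */
static unsigned int toy_update(unsigned int state,
			       const unsigned char *p, size_t n)
{
	while (n--)
		state += *p++;
	return state;
}

/* The chunking pattern under discussion: process at most SZ_4K bytes
 * per kernel_neon_begin()/kernel_neon_end() section so that scheduling
 * latency stays bounded when kernel-mode NEON is not preemptible. */
static unsigned int chunked_update(unsigned int state,
				   const unsigned char *data, size_t len)
{
	while (len) {
		size_t todo = min_t(size_t, len, SZ_4K);

		kernel_neon_begin();
		state = toy_update(state, data, todo);
		kernel_neon_end();

		data += todo;
		len -= todo;
	}
	return state;
}
```

With preemptible kernel-mode NEON, the loop collapses to a single
begin/update/end over the whole buffer, which is the cleanup being
applied here.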

I thought this chunking was still needed, despite kernel-mode NEON now
being preemptible on arm64, to provide explicit preemption points under
CONFIG_PREEMPT_VOLUNTARY.

However, I see that support for CONFIG_PREEMPT_VOLUNTARY was recently
removed on arm64.  So that's what finally makes this no longer needed,
and we can now clean up these other cases too, right?

(Though, I can't find where the voluntary preemption points actually
were.  So maybe they weren't actually there anyway.)

- Eric
