[RFC PATCH v2 7/7] crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL
Eric Biggers
ebiggers at kernel.org
Fri Feb 18 17:21:17 PST 2022
A lot of the comments I made on the x86_64 version apply to the arm64 version
too, but a few additional comments are below:
On Thu, Feb 10, 2022 at 11:28:12PM +0000, Nathan Huckleberry wrote:
> Add hardware accelerated version of POLYVAL for ARM64 CPUs with
> Crypto Extension support.
Nit: It's "Crypto Extensions", not "Crypto Extension".
> +config CRYPTO_POLYVAL_ARM64_CE
> +	tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)"
> +	depends on KERNEL_MODE_NEON
> +	select CRYPTO_CRYPTD
> +	select CRYPTO_HASH
> +	select CRYPTO_POLYVAL
CRYPTO_POLYVAL selects CRYPTO_HASH already, so there's no need to select it
here.
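I.e., this could just be:

config CRYPTO_POLYVAL_ARM64_CE
	tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)"
	depends on KERNEL_MODE_NEON
	select CRYPTO_CRYPTD
	select CRYPTO_POLYVAL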
> /*
>  * Handle any extra blocks before
>  * full_stride loop.
>  */
> .macro partial_stride
> 	eor	LO.16b, LO.16b, LO.16b
> 	eor	MI.16b, MI.16b, MI.16b
> 	eor	HI.16b, HI.16b, HI.16b
> 	add	KEY_START, x1, #(NUM_PRECOMPUTE_POWERS << 4)
> 	sub	KEY_START, KEY_START, PARTIAL_LEFT, lsl #4
> 	ld1	{v0.16b}, [KEY_START]
> 	mov	v1.16b, SUM.16b
> 	karatsuba1 v0 v1
> 	karatsuba2
> 	montgomery_reduction
> 	mov	SUM.16b, PH.16b
> 	eor	LO.16b, LO.16b, LO.16b
> 	eor	MI.16b, MI.16b, MI.16b
> 	eor	HI.16b, HI.16b, HI.16b
> 	mov	IND, XZR
> .LloopPartial:
> 	cmp	IND, PARTIAL_LEFT
> 	bge	.LloopExitPartial
>
> 	sub	TMP, IND, PARTIAL_LEFT
>
> 	cmp	TMP, #-4
> 	bgt	.Lgt4Partial
> 	ld1	{M0.16b, M1.16b, M2.16b, M3.16b}, [x0], #64
> 	// Clobber key registers
> 	ld1	{KEY8.16b, KEY7.16b, KEY6.16b, KEY5.16b}, [KEY_START], #64
> 	karatsuba1 M0 KEY8
> 	karatsuba1 M1 KEY7
> 	karatsuba1 M2 KEY6
> 	karatsuba1 M3 KEY5
> 	add	IND, IND, #4
> 	b	.LoutPartial
>
> .Lgt4Partial:
> 	cmp	TMP, #-3
> 	bgt	.Lgt3Partial
> 	ld1	{M0.16b, M1.16b, M2.16b}, [x0], #48
> 	// Clobber key registers
> 	ld1	{KEY8.16b, KEY7.16b, KEY6.16b}, [KEY_START], #48
> 	karatsuba1 M0 KEY8
> 	karatsuba1 M1 KEY7
> 	karatsuba1 M2 KEY6
> 	add	IND, IND, #3
> 	b	.LoutPartial
>
> .Lgt3Partial:
> 	cmp	TMP, #-2
> 	bgt	.Lgt2Partial
> 	ld1	{M0.16b, M1.16b}, [x0], #32
> 	// Clobber key registers
> 	ld1	{KEY8.16b, KEY7.16b}, [KEY_START], #32
> 	karatsuba1 M0 KEY8
> 	karatsuba1 M1 KEY7
> 	add	IND, IND, #2
> 	b	.LoutPartial
>
> .Lgt2Partial:
> 	ld1	{M0.16b}, [x0], #16
> 	// Clobber key registers
> 	ld1	{KEY8.16b}, [KEY_START], #16
> 	karatsuba1 M0 KEY8
> 	add	IND, IND, #1
> .LoutPartial:
> 	b	.LloopPartial
> .LloopExitPartial:
> 	karatsuba2
> 	montgomery_reduction
> 	eor	SUM.16b, SUM.16b, PH.16b
> .endm
This sort of logic for handling different message lengths is necessary, but it's
partly why I think that testing different message lengths is so important --
there could be bugs that are specific to particular message lengths.
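To spell out the length cases, the dispatch above is roughly equivalent to the
following C model (my sketch of the control flow only; the names are made up
and this is not the actual kernel code):

static void partial_stride_model(int nblocks)
{
	/*
	 * nblocks = number of leftover 16-byte blocks, 0..7 here since
	 * full_stride consumes 8 blocks (128 bytes) per iteration.
	 */
	int done = 0;

	while (done < nblocks) {
		switch (nblocks - done) {
		default:	/* >= 4 blocks left: 4-block path */
			done += 4;
			break;
		case 3:		/* 3-block path */
			done += 3;
			break;
		case 2:		/* 2-block path */
			done += 2;
			break;
		case 1:		/* 1-block path */
			done += 1;
			break;
		}
		/*
		 * Each path loads that many message blocks and key powers
		 * and accumulates the karatsuba1 products.
		 */
	}
	/* Then a single karatsuba2 + montgomery_reduction at the end. */
}

So, e.g., 7 leftover blocks go through the 4-block path followed by the
3-block path.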
With CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, we do get test coverage from the fuzz
tests that compare implementations to the corresponding generic implementation.
But it's preferable not to rely on that and to have good default test vectors too.
It looks like you already do a relatively good job with the message lengths in
the polyval test vectors. But it might be worth adding a test vector where the
length mod 128 is 112, so that the case where partial_stride processes 7 blocks
is tested. Also one where the message length is greater than 128 (or even 256)
bytes but isn't a multiple of 128, so that the case where *both* partial_stride
and full_stride are executed is tested.
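Concretely, I mean entries of this shape in the polyval test vectors (a sketch
only -- the real entries would need actual key/plaintext/digest bytes, e.g.
generated with the generic polyval implementation, and I'm assuming the
template is named polyval_tv_template following the usual crypto/testmgr.h
convention):

static const struct hash_testvec polyval_tv_template[] = {
	{
		/* psize % 128 == 112: partial_stride handles 7 blocks */
		.ksize = 16,
		.psize = 112,
		/* .key = ..., .plaintext = ..., .digest = ... */
	}, {
		/*
		 * > 128 bytes but not a multiple of 128: full_stride
		 * followed by partial_stride
		 */
		.ksize = 16,
		.psize = 144,
		/* .key = ..., .plaintext = ..., .digest = ... */
	},
};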
- Eric