[PATCH v2 05/13] crypto: simd - Update `walksize` in simd skcipher
Eric Biggers
ebiggers at kernel.org
Tue Nov 28 09:22:04 PST 2023
On Tue, Nov 28, 2023 at 01:38:29PM +0800, Jerry Shih wrote:
> On Nov 28, 2023, at 11:58, Eric Biggers <ebiggers at kernel.org> wrote:
> > On Mon, Nov 27, 2023 at 03:06:55PM +0800, Jerry Shih wrote:
> >> The `walksize` assignment is missing in the simd skcipher.
> >>
> >> Signed-off-by: Jerry Shih <jerry.shih at sifive.com>
> >> ---
> >> crypto/cryptd.c | 1 +
> >> crypto/simd.c | 1 +
> >> 2 files changed, 2 insertions(+)
> >>
> >> diff --git a/crypto/cryptd.c b/crypto/cryptd.c
> >> index bbcc368b6a55..253d13504ccb 100644
> >> --- a/crypto/cryptd.c
> >> +++ b/crypto/cryptd.c
> >> @@ -405,6 +405,7 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
> >> (alg->base.cra_flags & CRYPTO_ALG_INTERNAL);
> >> inst->alg.ivsize = crypto_skcipher_alg_ivsize(alg);
> >> inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
> >> + inst->alg.walksize = crypto_skcipher_alg_walksize(alg);
> >> inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
> >> inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
> >>
> >> diff --git a/crypto/simd.c b/crypto/simd.c
> >> index edaa479a1ec5..ea0caabf90f1 100644
> >> --- a/crypto/simd.c
> >> +++ b/crypto/simd.c
> >> @@ -181,6 +181,7 @@ struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname,
> >>
> >> alg->ivsize = ialg->ivsize;
> >> alg->chunksize = ialg->chunksize;
> >> + alg->walksize = ialg->walksize;
> >> alg->min_keysize = ialg->min_keysize;
> >> alg->max_keysize = ialg->max_keysize;
> >
> > What are the consequences of this bug? I wonder if it actually matters? The
> > "inner" algorithm is the one that actually gets used for the "walk", right?
> >
> > - Eric
>
> Without this, we might still use chunksize or cra_blocksize as the walksize
> even though the underlying algorithm was set up with a larger walksize.
>
> Here is the code that sets the default walksize:
> static int skcipher_prepare_alg(struct skcipher_alg *alg)
> {
> ...
> if (!alg->chunksize)
> alg->chunksize = base->cra_blocksize;
> if (!alg->walksize)
> alg->walksize = alg->chunksize;
>
> And the x86 aes-xts implementation already declares a larger walksize:
> .base = {
> .cra_name = "__xts(aes)",
> ...
> },
> .walksize = 2 * AES_BLOCK_SIZE,
>
> The x86 aes-xts implementation only uses one `walk` to handle the tail
> elements, and it assumes that walk step covers 2 AES blocks. If the walksize
> is not set correctly, some tail elements might not be processed in
> simd-cipher mode for x86 aes-xts.
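
As a rough illustration of the tail handling described above (a sketch only;
xts_handle_tail_sketch() and the xts_cts_tail() helper are hypothetical, not
the actual x86 glue code), the final walk step has to expose the last full
block and the trailing partial block together:

#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>

/* Hypothetical helper: ciphertext stealing over the last full + partial block. */
void xts_cts_tail(u8 *dst, const u8 *src, unsigned int len);

static int xts_handle_tail_sketch(struct skcipher_request *req)
{
	/* req covers only the tail: AES_BLOCK_SIZE + (cryptlen % AES_BLOCK_SIZE) bytes. */
	struct skcipher_walk walk;
	int err;

	err = skcipher_walk_virt(&walk, req, false);
	if (err)
		return err;

	/*
	 * A walksize of 2 * AES_BLOCK_SIZE guarantees that this single step
	 * exposes the whole tail (walk.nbytes == req->cryptlen).  With a
	 * one-block walksize the step could end after the last full block,
	 * and the trailing partial block would never be processed.
	 */
	xts_cts_tail(walk.dst.virt.addr, walk.src.virt.addr, walk.nbytes);

	return skcipher_walk_done(&walk, 0);
}
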
With the SIMD helper there are three "algorithms": the underlying algorithm, the
cryptd algorithm, and the simd algorithm. This patch propagates the "walksize"
property from the underlying algorithm to the cryptd and simd algorithms. I
don't see how that actually makes a difference, since the only
place the skcipher_walk happens is on the underlying algorithm. So it uses the
"walksize" from the underlying algorithm, right?
- Eric
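
For context, simd_skcipher_encrypt() in crypto/simd.c looks roughly like the
following (comments added here for illustration); the simd wrapper only
re-targets the request at either the cryptd tfm or the inner tfm, so the
skcipher_walk, and therefore the walksize that matters, belongs to the
algorithm that finally runs:

static int simd_skcipher_encrypt(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct simd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
	struct skcipher_request *subreq;
	struct crypto_skcipher *child;

	subreq = skcipher_request_ctx(req);
	*subreq = *req;

	if (!crypto_simd_usable() ||
	    (in_atomic() && cryptd_skcipher_queued(ctx->cryptd_tfm)))
		/* Defer to cryptd, which later invokes the inner algorithm. */
		child = &ctx->cryptd_tfm->base;
	else
		/* SIMD is usable: call the inner (internal) algorithm directly. */
		child = cryptd_skcipher_child(ctx->cryptd_tfm);

	skcipher_request_set_tfm(subreq, child);

	/* The skcipher_walk happens inside the inner algorithm's encrypt(). */
	return crypto_skcipher_encrypt(subreq);
}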