[PATCH v5 3/8] crypto: hctr2 - Add HCTR2 support
Eric Biggers
ebiggers at kernel.org
Sun May 1 11:33:26 PDT 2022
On Wed, Apr 27, 2022 at 12:37:54AM +0000, Nathan Huckleberry wrote:
> +/* The input data for each HCTR2 hash step begins with a 16-byte block that
> + * contains the tweak length and a flag that indicates whether the input is evenly
> + * divisible into blocks. Since this implementation only supports one tweak
> + * length, we precompute the two hash states resulting from hashing the two
> + * possible values of this initial block. This reduces by one block the amount of
> + * data that needs to be hashed for each encryption/decryption
> + *
> + * These precomputed hashes are stored in hctr2_tfm_ctx.
> + */
Block comments should look like this:
/*
* text
*/
i.e., there should be a newline after the "/*".
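For example, the quoted comment reformatted in that style (same text, just
re-wrapped and with the sentence-ending period restored) would read:

/*
 * The input data for each HCTR2 hash step begins with a 16-byte block that
 * contains the tweak length and a flag that indicates whether the input is
 * evenly divisible into blocks. Since this implementation only supports one
 * tweak length, we precompute the two hash states resulting from hashing the
 * two possible values of this initial block. This reduces by one block the
 * amount of data that needs to be hashed for each encryption/decryption.
 *
 * These precomputed hashes are stored in hctr2_tfm_ctx.
 */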
> + memset(tctx->L, 0, sizeof(tctx->L));
> + memset(hbar, 0, sizeof(hbar));
> + tctx->L[0] = 0x01;
> + crypto_cipher_encrypt_one(tctx->blockcipher, tctx->L, tctx->L);
> + crypto_cipher_encrypt_one(tctx->blockcipher, hbar, hbar);
This would be easier to read if the computations of hbar and L were separated:
memset(hbar, 0, sizeof(hbar));
crypto_cipher_encrypt_one(tctx->blockcipher, hbar, hbar);
memset(tctx->L, 0, sizeof(tctx->L));
tctx->L[0] = 0x01;
crypto_cipher_encrypt_one(tctx->blockcipher, tctx->L, tctx->L);
> +static int hctr2_hash_message(struct skcipher_request *req,
> + struct scatterlist *sgl,
> + u8 digest[POLYVAL_DIGEST_SIZE])
> +{
> + u8 padding[BLOCKCIPHER_BLOCK_SIZE];
> + struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
> + struct shash_desc *hash_desc = &rctx->u.hash_desc;
> + const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
> + struct sg_mapping_iter miter;
> + unsigned int remainder = bulk_len % BLOCKCIPHER_BLOCK_SIZE;
> + int err, i;
> + int n = 0;
> +
> + sg_miter_start(&miter, sgl, sg_nents(sgl),
> + SG_MITER_FROM_SG | SG_MITER_ATOMIC);
> + for (i = 0; i < bulk_len; i += n) {
> + sg_miter_next(&miter);
> + n = min_t(unsigned int, miter.length, bulk_len - i);
> + err = crypto_shash_update(hash_desc, miter.addr, n);
> + if (err)
> + break;
> + }
> + sg_miter_stop(&miter);
> +
> + if (err)
> + return err;
There's actually an uninitialized variable bug here: if bulk_len == 0, then
'err' never gets initialized before being checked. Surprisingly, the compiler
doesn't warn about this. 'err' needs to be initialized to 0.
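In other words, the declaration near the top of hctr2_hash_message() would
become something like:

	int err = 0, i;
	int n = 0;

(or 'err = 0;' could be set just before the sg_miter loop); either way the
bulk_len == 0 case then falls through cleanly to the padding handling below.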
> +
> + if (remainder) {
> + memset(padding, 0, BLOCKCIPHER_BLOCK_SIZE);
> + padding[0] = 0x01;
'padding' can be static const:
static const u8 padding[BLOCKCIPHER_BLOCK_SIZE] = { 0x1 };
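With that change the remainder handling would presumably end up looking
something like this (the crypto_shash_update() call itself isn't quoted above,
so this is just a sketch inferred from the padding scheme):

	if (remainder) {
		static const u8 padding[BLOCKCIPHER_BLOCK_SIZE] = { 0x1 };

		/*
		 * Pad the final partial block out to a full block. The
		 * length expression here is inferred, not quoted from the
		 * patch.
		 */
		err = crypto_shash_update(hash_desc, padding,
					  BLOCKCIPHER_BLOCK_SIZE - remainder);
		if (err)
			return err;
	}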
> + subreq_size = max(sizeof_field(struct hctr2_request_ctx, u.hash_desc) +
> + crypto_shash_descsize(polyval), sizeof_field(struct
> + hctr2_request_ctx, u.xctr_req) +
> + crypto_skcipher_reqsize(xctr));
This is a little hard to read; it would be better if the two sizeof_field()
expressions were aligned:
subreq_size = max(sizeof_field(struct hctr2_request_ctx, u.hash_desc) +
crypto_shash_descsize(polyval),
sizeof_field(struct hctr2_request_ctx, u.xctr_req) +
crypto_skcipher_reqsize(xctr));
Other than that, everything looks good. The only thing that really has to be
fixed is the uninitialized variable. After that, feel free to add:
Reviewed-by: Eric Biggers <ebiggers at google.com>
- Eric