[PATCH v4 4/8] crypto: x86/aesni-xctr: Add accelerated implementation of XCTR
Eric Biggers
ebiggers at kernel.org
Thu Apr 14 00:00:51 PDT 2022
A few initial comments; I'll take a closer look at the .S file soon...
On Tue, Apr 12, 2022 at 05:28:12PM +0000, Nathan Huckleberry wrote:
> Add hardware accelerated versions of XCTR for x86-64 CPUs with AESNI
> support. These implementations are modified versions of the CTR
> implementations found in aesni-intel_asm.S and aes_ctrby8_avx-x86_64.S.
>
> More information on XCTR can be found in the HCTR2 paper:
> Length-preserving encryption with HCTR2:
> https://enterprint.iacr.org/2021/1441.pdf
The above link doesn't work.
> +#ifdef __x86_64__
> +/*
> + * void aesni_xctr_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
> + * size_t len, u8 *iv, int byte_ctr)
> + */
This prototype doesn't match the one declared in the .c file.
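I'm not sure which side you intend to be authoritative, but judging from the
aesni_xctr_enc_avx_tfm wrapper further down in the patch, I'd expect the .S
comment to look something more like this (just a sketch, adjust to whatever the
.c file actually declares):

	/*
	 * void aesni_xctr_enc(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src,
	 *		       unsigned int len, u8 *iv, unsigned int byte_ctr)
	 */

i.e. with const on the source rather than the destination, and with the integer
types matching the C declaration.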
> +
> +asmlinkage void aes_xctr_enc_128_avx_by8(const u8 *in, u8 *iv, void *keys, u8
> + *out, unsigned int num_bytes, unsigned int byte_ctr);
> +
> +asmlinkage void aes_xctr_enc_192_avx_by8(const u8 *in, u8 *iv, void *keys, u8
> + *out, unsigned int num_bytes, unsigned int byte_ctr);
> +
> +asmlinkage void aes_xctr_enc_256_avx_by8(const u8 *in, u8 *iv, void *keys, u8
> + *out, unsigned int num_bytes, unsigned int byte_ctr);
Please don't have line breaks between parameter types and their names.
These should look like:
asmlinkage void aes_xctr_enc_128_avx_by8(const u8 *in, u8 *iv, void *keys,
u8 *out, unsigned int num_bytes, unsigned int byte_ctr);
Also, why aren't the keys const?
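For example, with both fixes applied to the first one (sketch only, assuming the
asm never writes to the key schedule, which is why const should be fine here):

	asmlinkage void aes_xctr_enc_128_avx_by8(const u8 *in, u8 *iv,
						 const void *keys, u8 *out,
						 unsigned int num_bytes,
						 unsigned int byte_ctr);

and likewise for the 192 and 256 variants.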
> +static void aesni_xctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out, const u8
> + *in, unsigned int len, u8 *iv, unsigned int
> + byte_ctr)
> +{
> + if (ctx->key_length == AES_KEYSIZE_128)
> + aes_xctr_enc_128_avx_by8(in, iv, (void *)ctx, out, len,
> + byte_ctr);
> + else if (ctx->key_length == AES_KEYSIZE_192)
> + aes_xctr_enc_192_avx_by8(in, iv, (void *)ctx, out, len,
> + byte_ctr);
> + else
> + aes_xctr_enc_256_avx_by8(in, iv, (void *)ctx, out, len,
> + byte_ctr);
> +}
Same comments above.
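E.g., the wrapper's parameter list could be reflowed along these lines (sketch
only):

	static void aesni_xctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out,
					   const u8 *in, unsigned int len,
					   u8 *iv, unsigned int byte_ctr)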
> +static int xctr_crypt(struct skcipher_request *req)
> +{
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct crypto_aes_ctx *ctx = aes_ctx(crypto_skcipher_ctx(tfm));
> + u8 keystream[AES_BLOCK_SIZE];
> + u8 ctr[AES_BLOCK_SIZE];
> + struct skcipher_walk walk;
> + unsigned int nbytes;
> + unsigned int byte_ctr = 0;
> + int err;
> + __le32 ctr32;
> +
> + err = skcipher_walk_virt(&walk, req, false);
> +
> + while ((nbytes = walk.nbytes) > 0) {
> + kernel_fpu_begin();
> + if (nbytes & AES_BLOCK_MASK)
> + static_call(aesni_xctr_enc_tfm)(ctx, walk.dst.virt.addr,
> + walk.src.virt.addr, nbytes & AES_BLOCK_MASK,
> + walk.iv, byte_ctr);
> + nbytes &= ~AES_BLOCK_MASK;
> + byte_ctr += walk.nbytes - nbytes;
> +
> + if (walk.nbytes == walk.total && nbytes > 0) {
> + ctr32 = cpu_to_le32(byte_ctr / AES_BLOCK_SIZE + 1);
> + memcpy(ctr, walk.iv, AES_BLOCK_SIZE);
> + crypto_xor(ctr, (u8 *)&ctr32, sizeof(ctr32));
> + aesni_enc(ctx, keystream, ctr);
> + crypto_xor_cpy(walk.dst.virt.addr + walk.nbytes -
> + nbytes, walk.src.virt.addr + walk.nbytes
> + - nbytes, keystream, nbytes);
> + byte_ctr += nbytes;
> + nbytes = 0;
> + }
For the final block case, it would be a bit simpler to do something like this:
__le32 block[AES_BLOCK_SIZE / sizeof(__le32)];
...
memcpy(block, walk.iv, AES_BLOCK_SIZE);
block[0] ^= cpu_to_le32(1 + byte_ctr / AES_BLOCK_SIZE);
aesni_enc(ctx, (u8 *)block, (u8 *)block);
I.e., have one buffer, use a regular XOR instead of crypto_xor(), and encrypt it
in-place.
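Roughly like this (untested sketch, reusing the names from your patch), which
also makes the separate keystream and ctr buffers unnecessary:

	if (walk.nbytes == walk.total && nbytes > 0) {
		__le32 block[AES_BLOCK_SIZE / sizeof(__le32)];

		/* XOR the block counter into the first 32 bits of the IV */
		memcpy(block, walk.iv, AES_BLOCK_SIZE);
		block[0] ^= cpu_to_le32(1 + byte_ctr / AES_BLOCK_SIZE);

		/* Encrypt in place to produce the keystream block */
		aesni_enc(ctx, (u8 *)block, (u8 *)block);

		/* XOR the partial final block with the keystream */
		crypto_xor_cpy(walk.dst.virt.addr + walk.nbytes - nbytes,
			       walk.src.virt.addr + walk.nbytes - nbytes,
			       (u8 *)block, nbytes);
		byte_ctr += nbytes;
		nbytes = 0;
	}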
> @@ -1162,6 +1249,8 @@ static int __init aesni_init(void)
> /* optimize performance of ctr mode encryption transform */
> static_call_update(aesni_ctr_enc_tfm, aesni_ctr_enc_avx_tfm);
> pr_info("AES CTR mode by8 optimization enabled\n");
> + static_call_update(aesni_xctr_enc_tfm, aesni_xctr_enc_avx_tfm);
> + pr_info("AES XCTR mode by8 optimization enabled\n");
> }
Please don't add the log message above, as it would get printed at every boot-up
on most x86 systems, and it's not important enough for that. The existing
message "AES CTR mode ..." shouldn't really exist in the first place.
- Eric