[PATCH v3 1/8] crypto: xctr - Add XCTR support

Eric Biggers ebiggers at kernel.org
Mon Mar 21 22:23:11 PDT 2022


On Tue, Mar 15, 2022 at 11:00:28PM +0000, Nathan Huckleberry wrote:
> Add a generic implementation of XCTR mode as a template.  XCTR is a
> blockcipher mode similar to CTR mode.  XCTR uses XORs and little-endian
> addition rather than big-endian arithmetic, which has two advantages: it
> is slightly faster on little-endian CPUs, and it is less likely to be
> implemented incorrectly since integer overflows are not possible on
> practical input sizes.  XCTR is used as a component to implement HCTR2.
> 
> More information on XCTR mode can be found in the HCTR2 paper:
> https://eprint.iacr.org/2021/1441.pdf
> 
> Signed-off-by: Nathan Huckleberry <nhuck at google.com>

Looks good, feel free to add:

Reviewed-by: Eric Biggers <ebiggers at google.com>

A few minor nits below:

> +// Limited to 16-byte blocks for simplicity
> +#define XCTR_BLOCKSIZE 16
> +
> +static void crypto_xctr_crypt_final(struct skcipher_walk *walk,
> +				   struct crypto_cipher *tfm, u32 byte_ctr)
> +{
> +	u8 keystream[XCTR_BLOCKSIZE];
> +	u8 *src = walk->src.virt.addr;

Use 'const u8 *src'

> +static int crypto_xctr_crypt_segment(struct skcipher_walk *walk,
> +				    struct crypto_cipher *tfm, u32 byte_ctr)
> +{
> +	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
> +		   crypto_cipher_alg(tfm)->cia_encrypt;
> +	u8 *src = walk->src.virt.addr;

Likewise, 'const u8 *src'

> +	u8 *dst = walk->dst.virt.addr;
> +	unsigned int nbytes = walk->nbytes;
> +	__le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1);
> +
> +	do {
> +		/* create keystream */
> +		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
> +		fn(crypto_cipher_tfm(tfm), dst, walk->iv);
> +		crypto_xor(dst, src, XCTR_BLOCKSIZE);
> +		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));

The comment "/* create keystream */" is a bit misleading, since the part of the
code that it describes isn't just creating the keystream, but also XOR'ing it
with the data.  It would be better to just remove that comment.

> +
> +		ctr32 = cpu_to_le32(le32_to_cpu(ctr32) + 1);

This could use le32_add_cpu().

> +
> +		src += XCTR_BLOCKSIZE;
> +		dst += XCTR_BLOCKSIZE;
> +	} while ((nbytes -= XCTR_BLOCKSIZE) >= XCTR_BLOCKSIZE);
> +
> +	return nbytes;
> +}
> +
> +static int crypto_xctr_crypt_inplace(struct skcipher_walk *walk,
> +				    struct crypto_cipher *tfm, u32 byte_ctr)
> +{
> +	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
> +		   crypto_cipher_alg(tfm)->cia_encrypt;
> +	unsigned long alignmask = crypto_cipher_alignmask(tfm);
> +	unsigned int nbytes = walk->nbytes;
> +	u8 *src = walk->src.virt.addr;

Perhaps call this 'data' instead of 'src', since here it's both the source and
destination?

> +	u8 tmp[XCTR_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
> +	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
> +	__le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1);
> +
> +	do {
> +		/* create keystream */

Likewise, remove or clarify the '/* create keystream */' comment.

> +		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
> +		fn(crypto_cipher_tfm(tfm), keystream, walk->iv);
> +		crypto_xor(src, keystream, XCTR_BLOCKSIZE);
> +		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
> +
> +		ctr32 = cpu_to_le32(le32_to_cpu(ctr32) + 1);

Likewise, le32_add_cpu().

- Eric
