[RFC PATCH v2 4/4] crypto: aes - add generic time invariant AES for CTR/CCM/GCM

Ard Biesheuvel ard.biesheuvel at linaro.org
Wed Feb 1 23:55:11 PST 2017


On 2 February 2017 at 07:48, Ard Biesheuvel <ard.biesheuvel at linaro.org> wrote:
> On 2 February 2017 at 07:38, Eric Biggers <ebiggers3 at gmail.com> wrote:
>> Hi Ard,
>>
>> On Sat, Jan 28, 2017 at 11:33:33PM +0000, Ard Biesheuvel wrote:
>>>
>>> Note that this only implements AES encryption, which is all we need
>>> for CTR and CBC-MAC. AES decryption can easily be implemented in a
>>> similar way, but is significantly more costly.
>>
>> Is the expectation of decryption being more costly the only reason why "aes-ti"
>> couldn't be a "cipher" algorithm, allowing it to automatically be used by the
>> existing templates for CTR, CBC-MAC, CBC, ECB, XTS, CMAC, etc.?
>
> Yes.
>
>> It doesn't seem
>> to do anything expensive on a per-block basis like loading SSE registers, so it
>> seems it would fit better as a "cipher" algorithm if at all possible.  Then
>> there would be no need to implement all these modes yet again.
>>
>
> True.
>
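To illustrate why a single "cipher" implementation would suffice: a mode template only ever needs the cipher's block function, so any block cipher plugs in without reimplementing the mode. A minimal standalone sketch (this is not the kernel crypto API; the names and the dummy cipher are made up for illustration):

```c
#include <stdint.h>
#include <string.h>

/* A toy CTR "template": it is parameterised only on the cipher's
 * block-encrypt function, so any block cipher can be dropped in. */

typedef void (*block_fn_t)(const void *key, const uint8_t in[16],
			   uint8_t out[16]);

static void ctr_inc(uint8_t ctr[16])
{
	/* big-endian increment of the counter block */
	for (int i = 15; i >= 0; i--)
		if (++ctr[i])
			break;
}

static void ctr_crypt(block_fn_t enc, const void *key, uint8_t ctr[16],
		      const uint8_t *in, uint8_t *out, size_t len)
{
	uint8_t ks[16];

	while (len) {
		size_t n = len < 16 ? len : 16;

		enc(key, ctr, ks);	/* generate one keystream block */
		for (size_t i = 0; i < n; i++)
			out[i] = in[i] ^ ks[i];
		ctr_inc(ctr);
		in += n;
		out += n;
		len -= n;
	}
}

/* Dummy "cipher" (XOR with one key byte), purely to exercise the mode. */
static void dummy_block(const void *key, const uint8_t in[16],
			uint8_t out[16])
{
	uint8_t k = *(const uint8_t *)key;

	for (int i = 0; i < 16; i++)
		out[i] = in[i] ^ k;
}
```

Since CTR is an involution, running ctr_crypt() twice with the same key and starting counter round-trips the data, which is all the template needs from the cipher.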
>> Also, what would be the feasibility of simply replacing aes-generic with the
>> time-invariant implementation, rather than offering two implementations and
>> requiring users to choose one, usually without the needed expertise?
>>
>
> Well, it is a policy decision whether you want the best performance or
> reduced correlation between timing and the input, so there is no way
> to make everybody happy by replacing one with the other. But I can
> certainly implement it as a cipher, and we can take the discussion
> from there.
>
>>> +
>>> +/*
>>> + * Emit the sbox as __weak with external linkage to prevent the compiler
>>> + * from doing constant folding on sbox references involving fixed indexes.
>>> + */
>>> +__weak const u8 __cacheline_aligned __aesti_sbox[] = {
>>
>> Did you consider marking it 'volatile' instead?
>>
>
> I did not, and I expect the result to be the same. I can replace it if
> it matters.
>
>>> +static int aesti_set_key(struct aes_ti_ctx *ctx, const u8 *in_key,
>>> +                      unsigned int key_len)
>>> +{
>>> +     struct crypto_aes_ctx rk;
>>> +     int err;
>>> +
>>> +     err = crypto_aes_expand_key(&rk, in_key, key_len);
>>> +     if (err)
>>> +             return err;
>>
>> crypto_aes_expand_key() assumes that the key is u32-aligned; I don't think
>> that's guaranteed here.
>>
>
> No, I don't think so. AFAICT it expects the round keys to be
> u32-aligned, which is guaranteed by the fact that struct
> crypto_aes_ctx encapsulates an array of u32.
>

I stand corrected: I mistook le32_to_cpu() for get_unaligned_le32().
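The distinction matters: dereferencing a u32 pointer and byte-swapping via le32_to_cpu() assumes the pointer is 4-byte aligned, whereas get_unaligned_le32() is safe for any byte offset, e.g. a caller-supplied key buffer. A standalone re-implementation of the latter's usual memcpy-based pattern (the helper name mimics the kernel's, but this is not the kernel source):

```c
#include <stdint.h>
#include <string.h>

/* Read a 32-bit little-endian value from an arbitrarily aligned
 * pointer. The memcpy lets the compiler emit whatever unaligned-safe
 * load the target supports; no alignment is assumed for p. */
static uint32_t get_unaligned_le32(const void *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	v = __builtin_bswap32(v);
#endif
	return v;
}
```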



More information about the linux-arm-kernel mailing list