[PATCH v2 07/13] RISC-V: crypto: add accelerated AES-CBC/CTR/ECB/XTS implementations
Jerry Shih
jerry.shih at sifive.com
Sat Dec 2 05:20:42 PST 2023
On Nov 30, 2023, at 04:16, Eric Biggers <ebiggers at kernel.org> wrote:
> On Wed, Nov 29, 2023 at 03:57:25PM +0800, Jerry Shih wrote:
>> On Nov 28, 2023, at 12:07, Eric Biggers <ebiggers at kernel.org> wrote:
>>> On Mon, Nov 27, 2023 at 03:06:57PM +0800, Jerry Shih wrote:
>>>> +typedef void (*aes_xts_func)(const u8 *in, u8 *out, size_t length,
>>>> + const struct crypto_aes_ctx *key, u8 *iv,
>>>> + int update_iv);
>>>
>>> There's no need for this indirection, because the function pointer can only have
>>> one value.
>>>
>>> Note also that when Control Flow Integrity is enabled, assembly functions can
>>> only be called indirectly when they use SYM_TYPED_FUNC_START. That's another
>>> reason to avoid indirect calls that aren't actually necessary.
>>
>> We have two function pointers for encryption and decryption.
>> static int xts_encrypt(struct skcipher_request *req)
>> {
>> return xts_crypt(req, rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt);
>> }
>>
>> static int xts_decrypt(struct skcipher_request *req)
>> {
>> return xts_crypt(req, rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt);
>> }
>> The enc and dec paths could be folded together into `xts_crypt()`, but we would
>> then need additional branches for the enc/decryption paths if we want to avoid
>> the indirect calls.
>> Using `SYM_TYPED_FUNC_START` in the asm might be better.
>>
>
> Right. Normal branches are still more efficient and straightforward than
> indirect calls, though, and they don't need any special considerations for CFI.
> So I'd just add a 'bool encrypt' or 'bool decrypt' argument to xts_crypt(), and
> make xts_crypt() call the appropriate assembly function based on that.
Fixed.
xts_crypt() now takes an additional bool argument to select encryption or decryption.
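Roughly, the direct-call dispatch now looks like the following (a simplified sketch
only; the names `in`, `out`, `length`, `key`, `iv` and `update_iv` just mirror the
parameters of the old typedef and are not necessarily the exact variables used in v3):

static int xts_crypt(struct skcipher_request *req, bool encrypt)
{
	...
	/* Direct calls instead of an indirect call through a function
	 * pointer, so the asm does not need SYM_TYPED_FUNC_START for CFI. */
	if (encrypt)
		rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt(in, out, length,
						       key, iv, update_iv);
	else
		rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt(in, out, length,
						       key, iv, update_iv);
	...
}

static int xts_encrypt(struct skcipher_request *req)
{
	return xts_crypt(req, true);
}

static int xts_decrypt(struct skcipher_request *req)
{
	return xts_crypt(req, false);
}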
>>> Did you consider writing xts_crypt() the way that arm64 and x86 do it? The
>>> above seems to reinvent sort of the same thing from first principles. I'm
>>> wondering if you should just copy the existing approach for now. Then there
>>> would be no need to add the scatterwalk_next() function, and also the handling
>>> of inputs that don't need ciphertext stealing would be a bit more streamlined.
>>
>> I will check the arm64 and x86 implementations.
>> But the `scatterwalk_next()` proposed in this series does the same thing as the
>> `scatterwalk_ffwd()` call in the arm64 and x86 implementations.
>> scatterwalk_ffwd() iterates from the beginning of the scatterlist (O(n)), while
>> scatterwalk_next() just continues from the end point of the last used
>> scatterlist entry (O(1)).
>
> Sure, but your scatterwalk_next() only matters when there are multiple
> scatterlist entries and the AES-XTS message length isn't a multiple of the AES
> block size. That's not an important case, so there's little need to
> micro-optimize it. The case that actually matters for AES-XTS is a single-entry
> scatterlist containing a whole number of AES blocks.
The v3 patch will remove the `scatterwalk_next()` and use `scatterwalk_ffwd()`
instead.
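For the ciphertext-stealing tail, v3 will follow the arm64/x86 pattern. A minimal
sketch of that pattern, assuming the full blocks up to `cts_offset` have already
been processed (the name `cts_offset` and the exact offset math here are only
illustrative; the real logic mirrors arch/arm64/crypto/aes-glue.c):

	struct scatterlist sg_src[2], sg_dst[2];
	struct scatterlist *src, *dst;
	unsigned int tail = req->cryptlen % AES_BLOCK_SIZE;
	unsigned int cts_offset = req->cryptlen - tail - AES_BLOCK_SIZE;

	/* scatterwalk_ffwd() walks from the head of the scatterlist, so it is
	 * O(n) in the number of entries, but this path only runs when the
	 * message length is not a multiple of AES_BLOCK_SIZE. */
	dst = src = scatterwalk_ffwd(sg_src, req->src, cts_offset);
	if (req->dst != req->src)
		dst = scatterwalk_ffwd(sg_dst, req->dst, cts_offset);

	/* Process the last full block plus the partial tail together so the
	 * CTS step can steal between them. */
	skcipher_request_set_crypt(req, src, dst, AES_BLOCK_SIZE + tail,
				   req->iv);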
-Jerry