[PATCH v2 00/19] crypto: arm64 - play nice with CONFIG_PREEMPT

Ard Biesheuvel <ard.biesheuvel at linaro.org>
Mon Dec 4 04:26:26 PST 2017


This is a followup to 'crypto: arm64 - disable NEON across scatterwalk API
calls' sent out last Friday.

As reported by Sebastian, the way the arm64 NEON crypto code currently
keeps kernel mode NEON enabled across calls into skcipher_walk_xxx() is
causing problems with RT builds, given that the skcipher walk API may
allocate and free temporary buffers that it uses to present the input and
output arrays to the crypto algorithm in blocksize sized chunks (where
blocksize is the natural blocksize of the crypto algorithm), and doing
so with NEON enabled means we're allocating/freeing memory with preemption
disabled.
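
To make the problem concrete, below is a reduced sketch of the pattern the
affected glue code follows today; key schedule handling is omitted and
neon_ecb_encrypt() is merely a stand-in for the real NEON asm helper:

    static int ecb_encrypt(struct skcipher_request *req)
    {
            struct skcipher_walk walk;
            unsigned int blocks;
            int err;

            /* atomic == true: the walk must not sleep while allocating,
             * since we are about to disable preemption */
            err = skcipher_walk_virt(&walk, req, true);

            kernel_neon_begin();
            while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
                    neon_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                     blocks);       /* hypothetical helper */

                    /* may kfree() a bounce buffer - with NEON still on */
                    err = skcipher_walk_done(&walk,
                                             walk.nbytes % AES_BLOCK_SIZE);
            }
            kernel_neon_end();

            return err;
    }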

This was deliberate: when this code was introduced, each kernel_neon_begin()
and kernel_neon_end() call incurred a fixed penalty of storing and loading,
respectively, the contents of all NEON registers to/from memory, and so doing
it less often had an obvious performance benefit. However, in the meantime,
we have refactored the core kernel mode NEON code, and now kernel_neon_begin()
only incurs this penalty the first time it is called after entering the kernel,
and the NEON register restore is deferred until returning to userland. This
means pulling those calls into the loops that iterate over the input/output
of the crypto algorithm is not a big deal anymore (although there are some
places in the code where we relied on the NEON registers retaining their
values between calls).

So let's clean this up for arm64: update the NEON based skcipher drivers to
no longer keep the NEON enabled when calling into the skcipher walk API.
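
After this change, the same loop looks roughly like the sketch below (again
using the hypothetical helper from above); note that the walk is now allowed
to use sleeping allocations, which is what dropping GFP_ATOMIC in #2 - #6 is
about:

    err = skcipher_walk_virt(&walk, req, false);

    while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
            kernel_neon_begin();
            neon_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                             blocks);               /* hypothetical helper */
            kernel_neon_end();

            /* the walk now frees its buffers with preemption enabled */
            err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
    }

Since kernel_neon_begin() no longer eagerly saves the register file after
the first call, the extra begin/end calls per walk step are cheap.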

As pointed out by Peter, this only solves part of the problem. So let's
tackle it more thoroughly, and update the algorithms to test the NEED_RESCHED
flag after processing each fixed-size chunk of input. An attempt was made
to align the different algorithms with regard to how much work such a fixed
chunk entails: yielding after every block for an algorithm that operates on
16 byte blocks at < 1 cycle per byte seems rather pointless.
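
At the C level, this amounts to capping how much input is processed under a
single kernel_neon_begin()/kernel_neon_end() pair, so that a pending
reschedule can be honoured between NEON sections. A minimal sketch; the 8
block chunk size and the sha256_block_neon() prototype are illustrative, not
the exact code:

    static void sha256_do_blocks(u32 *state, const u8 *data,
                                 unsigned int blocks)
    {
            while (blocks) {
                    unsigned int chunk = min(blocks, 8U);

                    /* NEON (and thus preemption) is only disabled for
                     * the duration of one chunk */
                    kernel_neon_begin();
                    sha256_block_neon(state, data, chunk);
                    kernel_neon_end();

                    data   += chunk * SHA256_BLOCK_SIZE;
                    blocks -= chunk;
            }
    }

The asm-only implementations get the equivalent behaviour from the new
assembler macro added in #11, which performs the NEED_RESCHED check from
the asm code itself.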

Changes since v1:
- add CRC-T10DIF test vector (#1)
- stop using GFP_ATOMIC in scatterwalk API calls, now that they are executed
  with preemption enabled (#2 - #6)
- do some preparatory refactoring on the AES block mode code (#7 - #9)
- add yield patches (#10 - #18)
- add test patch (#19) - DO NOT MERGE

Cc: Dave Martin <Dave.Martin at arm.com>
Cc: Russell King - ARM Linux <linux at armlinux.org.uk>
Cc: Sebastian Andrzej Siewior <bigeasy at linutronix.de>
Cc: Mark Rutland <mark.rutland at arm.com>
Cc: linux-rt-users at vger.kernel.org
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Catalin Marinas <catalin.marinas at arm.com>
Cc: Will Deacon <will.deacon at arm.com>
Cc: Steven Rostedt <rostedt at goodmis.org>
Cc: Thomas Gleixner <tglx at linutronix.de>

Ard Biesheuvel (19):
  crypto: testmgr - add a new test case for CRC-T10DIF
  crypto: arm64/aes-ce-ccm - move kernel mode neon en/disable into loop
  crypto: arm64/aes-blk - move kernel mode neon en/disable into loop
  crypto: arm64/aes-bs - move kernel mode neon en/disable into loop
  crypto: arm64/chacha20 - move kernel mode neon en/disable into loop
  crypto: arm64/ghash - move kernel mode neon en/disable into loop
  crypto: arm64/aes-blk - remove configurable interleave
  crypto: arm64/aes-blk - add 4 way interleave to CBC encrypt path
  crypto: arm64/aes-blk - add 4 way interleave to CBC-MAC encrypt path
  crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels
  arm64: assembler: add macro to conditionally yield the NEON under
    PREEMPT
  crypto: arm64/sha1-ce - yield every 8 blocks of input
  crypto: arm64/sha2-ce - yield every 8 blocks of input
  crypto: arm64/aes-blk - yield after processing each 64 bytes of input
  crypto: arm64/aes-bs - yield after processing each 128 bytes of input
  crypto: arm64/aes-ghash - yield after processing fixed number of
    blocks
  crypto: arm64/crc32-ce - yield NEON every 16 blocks of input
  crypto: arm64/crct10dif-ce - yield NEON every 8 blocks of input
  DO NOT MERGE

 arch/arm64/crypto/Makefile             |   3 -
 arch/arm64/crypto/aes-ce-ccm-glue.c    |  47 +-
 arch/arm64/crypto/aes-ce.S             |  17 +-
 arch/arm64/crypto/aes-glue.c           |  95 ++-
 arch/arm64/crypto/aes-modes.S          | 624 ++++++++++----------
 arch/arm64/crypto/aes-neon.S           |   2 +
 arch/arm64/crypto/aes-neonbs-core.S    | 317 ++++++----
 arch/arm64/crypto/aes-neonbs-glue.c    |  48 +-
 arch/arm64/crypto/chacha20-neon-glue.c |  12 +-
 arch/arm64/crypto/crc32-ce-core.S      |  55 +-
 arch/arm64/crypto/crct10dif-ce-core.S  |  39 +-
 arch/arm64/crypto/ghash-ce-core.S      | 128 ++--
 arch/arm64/crypto/ghash-ce-glue.c      |  17 +-
 arch/arm64/crypto/sha1-ce-core.S       |  45 +-
 arch/arm64/crypto/sha2-ce-core.S       |  40 +-
 arch/arm64/crypto/sha256-glue.c        |  36 +-
 arch/arm64/include/asm/assembler.h     |  83 +++
 crypto/testmgr.h                       | 259 ++++++++
 18 files changed, 1231 insertions(+), 636 deletions(-)

-- 
2.11.0



