[PATCH] crypto: shash - reduce minimum alignment of shash_desc structure
Ard Biesheuvel
ardb at kernel.org
Fri Jan 8 03:36:23 EST 2021
On Thu, 7 Jan 2021 at 20:02, Eric Biggers <ebiggers at kernel.org> wrote:
>
> On Thu, Jan 07, 2021 at 01:41:28PM +0100, Ard Biesheuvel wrote:
> > Unlike many other structure types defined in the crypto API, the
> > 'shash_desc' structure is permitted to live on the stack, which
> > implies its contents may not be accessed by DMA masters. (This is
> > due to the fact that the stack may be located in the vmalloc area,
> > which requires a different virtual-to-physical translation than the
> > one implemented by the DMA subsystem)
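
For context, the on-stack pattern this refers to typically looks
something like the sketch below (illustrative only; the helper name and
error handling are mine, not part of the patch):

        /* #include <crypto/hash.h> */

        static int example_sha256(struct crypto_shash *tfm, const u8 *data,
                                  unsigned int len, u8 *out)
        {
                SHASH_DESC_ON_STACK(desc, tfm); /* descriptor lives on the stack */
                int err;

                desc->tfm = tfm;
                err = crypto_shash_digest(desc, data, len, out);
                shash_desc_zero(desc);          /* wipe the on-stack context */

                return err;
        }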
> >
> > Our definition of CRYPTO_MINALIGN_ATTR is based on ARCH_KMALLOC_MINALIGN,
> > which may take DMA constraints into account on architectures that support
> > non-cache coherent DMA such as ARM and arm64. In this case, the value is
> > chosen to reflect the largest cacheline size in the system, in order to
> > ensure that explicit cache maintenance as required by non-coherent DMA
> > masters does not affect adjacent, unrelated slab allocations. On arm64,
> > this value is currently set at 128 bytes.
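
For reference, the relevant definitions (paraphrased from
include/linux/crypto.h and the arm64/slab headers around this kernel
version; exact values depend on architecture and config) are along
these lines:

        /* include/linux/crypto.h */
        #define CRYPTO_MINALIGN ARCH_KMALLOC_MINALIGN
        #define CRYPTO_MINALIGN_ATTR __attribute__ ((__aligned__(CRYPTO_MINALIGN)))

        /*
         * arm64: ARCH_KMALLOC_MINALIGN tracks ARCH_DMA_MINALIGN, the
         * largest cache line size the kernel may run on.
         */
        #define ARCH_DMA_MINALIGN       (128)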
> >
> > This means that applying CRYPTO_MINALIGN_ATTR to struct shash_desc is both
> > unnecessary (as it is never used for DMA), and undesirable, given that it
> > wastes stack space (on arm64, performing the alignment costs 112 bytes in
> > the worst case, and the hole between the 'tfm' and '__ctx' members takes
> > up another 120 bytes, resulting in an increased stack footprint of up to
> > 232 bytes.) So instead, let's switch to the minimum SLAB alignment, which
> > does not take DMA constraints into account.
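
To spell out the arithmetic (an illustrative layout, not actual pahole
output):

        /*
         * struct shash_desc with CRYPTO_MINALIGN_ATTR on arm64:
         *
         *     struct crypto_shash *tfm;    offset   0, size 8
         *     <padding>                    120-byte hole
         *     void *__ctx[];               offset 128 (128-byte aligned)
         *
         * On top of that, the stack pointer is only guaranteed to be
         * 16-byte aligned, so forcing the SHASH_DESC_ON_STACK buffer to
         * 128-byte alignment can waste up to 128 - 16 = 112 additional
         * bytes: 120 + 112 = 232 bytes of worst-case overhead.
         */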
> >
> > Note that this is a no-op for x86.
> >
> > Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> > ---
> > include/crypto/hash.h | 8 ++++----
> > 1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/crypto/hash.h b/include/crypto/hash.h
> > index af2ff31ff619..13f8a6a54ca8 100644
> > --- a/include/crypto/hash.h
> > +++ b/include/crypto/hash.h
> > @@ -149,7 +149,7 @@ struct ahash_alg {
> >
> > struct shash_desc {
> > struct crypto_shash *tfm;
> > - void *__ctx[] CRYPTO_MINALIGN_ATTR;
> > + void *__ctx[] __aligned(ARCH_SLAB_MINALIGN);
> > };
> >
> > #define HASH_MAX_DIGESTSIZE 64
> > @@ -162,9 +162,9 @@ struct shash_desc {
> >
> > #define HASH_MAX_STATESIZE 512
> >
> > -#define SHASH_DESC_ON_STACK(shash, ctx) \
> > - char __##shash##_desc[sizeof(struct shash_desc) + \
> > - HASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR; \
> > +#define SHASH_DESC_ON_STACK(shash, ctx) \
> > + char __##shash##_desc[sizeof(struct shash_desc) + HASH_MAX_DESCSIZE] \
> > + __aligned(__alignof__(struct shash_desc)); \
> > struct shash_desc *shash = (struct shash_desc *)__##shash##_desc
>
> Looks good to me, but it would be helpful if the comment above the definition of
> CRYPTO_MINALIGN in include/linux/crypto.h was updated.
>
I'd be inclined to update CRYPTO_MINALIGN altogether, given that there
should be very few cases where this actually matters (we've had to fix
some non-coherent DMA issues in the past, but in the general case, all
buffers that are passed to devices for DMA should be described via
scatterlists, and I don't think we permit pointing the scatterlist
into request structures).
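
To illustrate that last point: an asynchronous hash user hands its data
to the API as a scatterlist over a linearly mapped buffer; the request
structure itself is never exposed to the device (a rough sketch, with a
hypothetical helper name and error handling omitted):

        /* #include <crypto/hash.h>, #include <linux/scatterlist.h> */

        static int example_ahash_digest(struct ahash_request *req, void *buf,
                                        unsigned int len, u8 *digest)
        {
                struct scatterlist sg;

                /* 'buf' must be linearly mapped (e.g. kmalloc'ed), not on the stack */
                sg_init_one(&sg, buf, len);
                ahash_request_set_crypt(req, &sg, digest, len);

                /* may return -EINPROGRESS/-EBUSY for truly async implementations */
                return crypto_ahash_digest(req);
        }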