[PATCH 0/8] stackleak: fixes and rework

Mark Rutland mark.rutland at arm.com
Mon Apr 25 04:55:55 PDT 2022


This series reworks the stackleak code. The first patch fixes some
latent issues on arm64, and the subsequent patches rework the code for
clarity and to permit better code generation.

I started working on this as a tangent from reworking arm64's
stacktrace code. Looking at users of the `on_*_stack()` helpers, I
noticed that the assembly generated for stackleak was particularly
awful: it performed a lot of redundant work and also called
instrumentable code, which isn't sound.

The first patch fixes the major issues on arm64, and is Cc'd to stable
for backporting.

The second patch is a trivial optimization for when stackleak is
dynamically disabled.
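
The shape of that change is roughly as below (a sketch rather than the
literal patch): the check is hoisted to the very start of
stackleak_erase(), before any of the erase state is set up, so the
dynamically-disabled case returns immediately:

  asmlinkage void noinstr stackleak_erase(void)
  {
          /*
           * When erasing is dynamically disabled, bail out before doing
           * any of the boundary/setup work below.
           */
          if (skip_erasing())
                  return;

          /* ... existing poison search and erase logic ... */
  }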

The subsequent patches rework the way stackleak manipulates the stack
boundary values. This is partly for clarity (e.g. using separate
'low' and 'high' boundary variables), and it also permits the compiler
to generate better assembly by deriving the high and low bounds
from the same base.
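
As a rough illustration of the idea (the helper names below are just
for illustration), both bounds can be derived from the task's stack
allocation, keeping the STACK_END_MAGIC word and the task's pt_regs
out of the range to erase:

  static __always_inline unsigned long
  stackleak_task_low_bound(const struct task_struct *tsk)
  {
          /*
           * The lowest word of the task stack holds STACK_END_MAGIC,
           * which must not be overwritten.
           */
          return (unsigned long)end_of_stack(tsk) + sizeof(unsigned long);
  }

  static __always_inline unsigned long
  stackleak_task_high_bound(const struct task_struct *tsk)
  {
          /* Don't erase the task's pt_regs at the top of the stack. */
          return (unsigned long)task_pt_regs(tsk);
  }

Since both are computed from the same task_struct, the compiler can
generate them from a single base rather than recalculating each
independently.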

Patch 5 changes the way that `current->lowest_stack` is reset prior to
return to userspace. The existing code uses an undocumented offset
relative to the top of the stack which doesn't make much sense: this
sometimes falls within the task's pt_regs, and sometimes adds 600+
bytes to erase upon the next exit to userspace. For now I've removed
the offset entirely.
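
With that, the reset before returning to userspace becomes a
straightforward assignment (sketched here using the illustrative
high-bound helper from above), rather than subtracting an arbitrary
offset from the top of the stack:

          /* Reset the poison scan start for the next syscall. */
          current->lowest_stack = stackleak_task_high_bound(current);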

Patch 7 adds stackleak_erase_on_task_stack() and
stackleak_erase_off_task_stack() that can be used when a caller knows
they're always on or off the task stack respectively, avoiding redundant
logic to check this and generate the high boundary value. On arm64 we
always call stackleak_erase() while on the task stack, so this is used
in patch 8.
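
Roughly, those variants end up as thin wrappers around a common
__always_inline body taking an on-task-stack flag (again a sketch, not
the exact code):

  /* For callers which may be on or off the task stack. */
  asmlinkage void noinstr stackleak_erase(void)
  {
          if (skip_erasing())
                  return;

          __stackleak_erase(on_thread_stack());
  }

  /* For callers known to be on the task stack. */
  asmlinkage void noinstr stackleak_erase_on_task_stack(void)
  {
          if (skip_erasing())
                  return;

          __stackleak_erase(true);
  }

  /* For callers known to be off the task stack. */
  asmlinkage void noinstr stackleak_erase_off_task_stack(void)
  {
          if (skip_erasing())
                  return;

          __stackleak_erase(false);
  }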

Testing the series on arm64 with a QEMU HVF VM on an M1 MacBook Pro
with a few microbenchmarks shows a small but measurable improvement
when stackleak is enabled (relative to v5.18-rc1):

* Calling getpid 2^22 times in a loop (avg 50 runs)
  
  Before: 0.652099387 seconds ( +-  0.13% )
  After:  0.641005661 seconds ( +-  0.13% )

  ~1.7% time decrease

* perf bench sched pipe (single run)

  Before: 2.138 seconds total
  After:  2.118 seconds total

  ~0.93% time decrease

I also tested "perf bench sched messaging" but the noise outweighed the
difference.

While the improvement is small, I think the improvement to clarity and
code generation is a win regardless.

Thanks,
Mark.

Mark Rutland (8):
  arm64: stackleak: fix current_top_of_stack()
  stackleak: move skip_erasing() check earlier
  stackleak: rework stack low bound handling
  stackleak: clarify variable names
  stackleak: rework stack high bound handling
  stackleak: remove redundant check
  stackleak: add on/off stack variants
  arm64: entry: use stackleak_erase_on_task_stack()

 arch/arm64/include/asm/processor.h | 10 ++-
 arch/arm64/kernel/entry.S          |  2 +-
 include/linux/stackleak.h          | 29 ++++++++-
 kernel/stackleak.c                 | 99 ++++++++++++++++++++----------
 4 files changed, 98 insertions(+), 42 deletions(-)

-- 
2.30.2



