[PATCH] arm64: signal: Update sigcontext reservations table

Dave Martin Dave.Martin at arm.com
Wed Jul 31 03:38:55 PDT 2024


On Tue, Jul 30, 2024 at 05:00:17PM +0100, Mark Brown wrote:
> On Tue, Jul 30, 2024 at 04:07:22PM +0100, Dave Martin wrote:
> > On Tue, Jul 30, 2024 at 02:22:47PM +0100, Mark Brown wrote:
> 
> > > Well, it only requires thought if you do something that pays attention
> > > to the signal frame layout - an awful lot of programs simply don't look
> > > at the frame and so don't care.  There are things like userspace threads
> > > which are particularly likely to be impacted but there's also a lot of
> > > code that just handles a signal and returns without ever looking at the
> > > frame.
> 
> > A program can't not pay attention to the sigframe _size_, i.e., even if
> > you ignore the sigcontext, you still have to have allocated your stack
> > big enough for it.
> 
> > That's the fundamental issue here.
> 
> A good percentage of programs manage to just use a default rather then
> ever explicitly specifying or configuring anything themselves - C
> programs will default to RLIMIT_STACK IIRC which is system configured
> and generally set rather high.  It's true that anything that is
> explicitly configuring stack sizes needs to worry about having enough
> stack space for a signal frame on top of whatever else it's doing
> (including anything limiting things system wide) but I'd be a bit
> surprised if it were the common case that things were actually paying
> attention.

That's all true, but even programs that don't explicitly work out stack
sizes may be relying on implicit knowledge, because the developers may
have simply bumped up stack sizes until things worked.

Note, RLIMIT_STACK only applies to the initial stack of the main thread.
Processes with threads might have many stacks, as might processes with
fibers/coroutines (allocated any old how, and often with no reference
to RLIMIT_STACK).

The aim here is to minimise surprises for code that made reasonable
assumptions at the time it was written, rather than to ensure that
every ancient binary that ever worked by accident still works, no
matter what crazy nonportable shenanigans it gets up to.

> 
> > > > Ideally, the toolchain would mark binaries with the features they are
> > > > compatible with, and try to load only compatible objects into the same
> > > > process.  The ELF properties (as used for BTI etc.) provide a generic
> > > > mechanism for this, but maybe we need to start pushing for labelling
> > > > for other properties too.  The "can it trigger an oversized sigframe"
> > > > property of an arch feature won't be obvious to the toolchain folks.
> 
> > > Hrm.  I can see this being fun with working out how the various
> > > extensions compose with each other and how to turn things that the
> > > toolchain usually wouldn't be aware of on.
> 
> > That's why I went for a simplified model:
> 
> > If a program exercises no opt-ins at all, then the sigframe must fit in
> > MINSIGSTKSZ bytes.
> 
> > If the program exercises any opt-in at all, the sigframe is not
> > guaranteed to fit in MINSIGSTKSZ bytes.  It's then the program's
> > responsibility to pay attention to the real worst-case size advertised
> > in AT_MINSIGSTKSZ in the auxv.
> 
> > As noted in the references, programs built against glibc-2.34 or later
> > with -D_GNU_SOURCE (or -D_DYNAMIC_STACK_SIZE_SOURCE) will actually be
> > using values based on the AT_MINSIGSTKSZ parameter rather than the old
> > constant; uses of MINSIGSTKSZ and SIGSTKSZ that require it to be
> > compile-time constant won't compile.
> 
> > The idea of the table in sigcontext.h was to help us track where opt-
> > ins are needed, and what opt-in conditions exist.  This maybe wasn't as
> > clear as it could have been.
> 
> I think some of it is the strength of the opt ins being considered.

Sorry, what do you mean here?

Cheers
---Dave
