[PATCH v8 5/9] seccomp: split mode set routines
Oleg Nesterov
oleg at redhat.com
Fri Jun 27 12:55:59 PDT 2014
On 06/27, Andy Lutomirski wrote:
>
> On Fri, Jun 27, 2014 at 12:27 PM, Oleg Nesterov <oleg at redhat.com> wrote:
> > On 06/27, Kees Cook wrote:
> >>
> >> It looks like SMP ARM issues dsb for rmb, which seems a bit expensive.
> >> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0204g/CIHJFGFE.htm
> >>
> >> ...
> >>
> >> I really want to avoid adding anything to the secure_computing()
> >> execution path. :(
> >
> > I must have missed something but I do not understand your concerns.
> >
> > __secure_computing() is not trivial, and we are going to execute the
> > filters. Do you really think rmb() can add a noticeable difference?
> >
> > Not to mention that we can only get here if we take the slow syscall
> > enter path due to TIF_SECCOMP...
> >
>
> On my box, with my fancy multi-phase seccomp patches, the total
> seccomp overhead for a very short filter is about 13ns. Adding a full
> barrier would add several ns, I think.
I am just curious: does this 13ns overhead include the penalty of the
slow syscall-entry path we take because of TIF_SECCOMP?
> Admittedly, this is x86, not ARM, so comparisons here are completely
> bogus. And that read memory barrier doesn't even need an instruction
> on x86. But still, let's try to keep this fast.
Well, I am not going to insist...
But imo it would be better to make it correct in the simplest way first;
then we can optimize this code and see whether there is a noticeable
difference.
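To me "correct in the simplest way" is just the obvious barrier on the
writer side, something like the sketch below (untested, and the names are
only for illustration, they do not necessarily match the patch):

/* sys_seccomp() path, called under ->sighand->siglock */
static void seccomp_assign_mode(struct task_struct *task,
				struct seccomp_filter *filter)
{
	task->seccomp.filter = filter;
	/*
	 * Make the new filter visible before the mode change; pairs
	 * with the smp_rmb() on the __secure_computing() side.
	 */
	smp_wmb();
	task->seccomp.mode = SECCOMP_MODE_FILTER;
	set_tsk_thread_flag(task, TIF_SECCOMP);
}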
Not only can we change the ordering, we can also remove the BUG_ON()'s and
simply accept the fact that __secure_computing() can race with sys_seccomp()
from another thread.
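IOW, on the reader side something like this (again untested;
__secure_computing_strict/filter are just placeholders for the existing
strict/filter code):

int __secure_computing(int this_syscall)
{
	int mode = ACCESS_ONCE(current->seccomp.mode);

	/* pairs with the smp_wmb() in seccomp_assign_mode() above */
	smp_rmb();

	switch (mode) {
	case SECCOMP_MODE_STRICT:
		return __secure_computing_strict(this_syscall);
	case SECCOMP_MODE_FILTER:
		return __secure_computing_filter(this_syscall);
	default:
		/*
		 * Not BUG(). Either mode is still SECCOMP_MODE_DISABLED,
		 * or we raced with sys_seccomp() from another thread and
		 * saw a stale value; just allow this syscall, the next
		 * one will see the new mode and filter.
		 */
		return 0;
	}
}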
If nothing else, it would be much simpler to discuss this patch if it comes
as a separate change.
Oleg.