[PATCH v4 01/11] mm: add Kernel Electric-Fence infrastructure

Dmitry Vyukov dvyukov at google.com
Fri Oct 2 10:22:59 EDT 2020


On Fri, Oct 2, 2020 at 9:54 AM Jann Horn <jannh at google.com> wrote:
>
> On Fri, Oct 2, 2020 at 8:33 AM Jann Horn <jannh at google.com> wrote:
> > On Tue, Sep 29, 2020 at 3:38 PM Marco Elver <elver at google.com> wrote:
> > > This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
> > > low-overhead sampling-based memory safety error detector of heap
> > > use-after-free, invalid-free, and out-of-bounds access errors.
> > >
> > > KFENCE is designed to be enabled in production kernels, and has near
> > > zero performance overhead. Compared to KASAN, KFENCE trades precision
> > > for performance. The main motivation behind KFENCE's design is that with
> > > enough total uptime KFENCE will detect bugs in code paths not typically
> > > exercised by non-production test workloads. One way to quickly achieve a
> > > large enough total uptime is to deploy the tool across a large fleet of
> > > machines.
> [...]
> > > +/*
> > > + * The pool of pages used for guard pages and objects. If supported, allocated
> > > + * statically, so that is_kfence_address() avoids a pointer load, and simply
> > > + * compares against a constant address. Assume that if KFENCE is compiled into
> > > + * the kernel, it is usually enabled, and the space is to be allocated one way
> > > + * or another.
> > > + */
> >
> > If this actually brings a performance win, the proper way to do this
> > would probably be to implement this as generic kernel infrastructure
> > that makes the compiler emit large-offset relocations (either through
> > compiler support or using inline asm statements that move an immediate
> > into a register output and register the location in a special section,
> > kinda like how e.g. static keys work) and patches them at boot time,
> > or something like that - there are other places in the kernel where
> > very hot code uses global pointers that are only ever written once
> > during boot, e.g. the dentry cache of the VFS and the futex hash
> > table. Those are probably far hotter than the kfence code.
> >
> > While I understand that that goes beyond the scope of this project, it
> > might be something to work on going forward - this kind of
> > special-case logic that turns the kernel data section into heap memory
> > would not be needed if we had that kind of infrastructure.
>
> After thinking about it a bit more, I'm not even convinced that this
> is a net positive in terms of overall performance - while it allows
> you to avoid one level of indirection in some parts of kfence, that
> kfence code by design only runs pretty infrequently. And to enable
> this indirection avoidance, your x86 arch_kfence_initialize_pool() is
> shattering potentially unrelated hugepages in the kernel data section,
> which might increase the TLB pressure (and therefore the number of
> memory loads that have to fall back to slow page walks) in code that
> is much hotter than yours.
>
> And if this indirection is a real performance problem, that problem
> would be many times worse in the VFS and the futex subsystem, so
> developing a more generic framework for doing this cleanly would be
> far more important than designing special-case code to allow kfence to
> do this.
>
> And from what I've seen, a non-trivial chunk of the code in this
> series, especially the arch/ parts, is only necessary to enable this
> microoptimization.
>
> Do you have performance numbers or a description of why you believe
> that this part of kfence is exceptionally performance-sensitive? If
> not, it might be a good idea to remove this optimization, at least for
> the initial version of this code. (And even if the optimization is
> worthwhile, it might be a better idea to go for the generic version
> immediately.)

This check is very hot; it happens on every free. For every freed
object we need to determine whether it belongs to KFENCE or not.
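
To make the cost concrete, here is a rough userspace sketch of that
check (illustrative only -- the names and the pool size below are made
up, not taken from the patch). With a statically allocated pool, the
bounds are link-time constants, so the per-free check compiles to a
compare against a constant address with no pointer load:

#include <stdbool.h>

#define POOL_SIZE (255 * 2 * 4096UL)  /* made-up size for the sketch */

/* Statically allocated pool: its address is a link-time constant. */
static char pool[POOL_SIZE] __attribute__((aligned(4096)));

static inline bool is_pool_address(const void *addr)
{
        /*
         * A single unsigned compare covers both bounds: for addresses
         * below the pool, the subtraction wraps around and the result
         * also exceeds POOL_SIZE.
         */
        return (unsigned long)((const char *)addr - pool) < POOL_SIZE;
}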

The generic framework for this already exists -- you simply create a
global variable ;)
KFENCE needs the range to be covered by struct pages, and that's what
creates problems for arm64. But I would assume most other users don't
need that.
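
For contrast, a sketch of the plain-global-variable variant mentioned
above (again illustrative, not patch code): the pool base is set once
at boot, and every check must first load that pointer from memory
before the bounds compare, which is exactly the indirection the static
pool avoids:

#include <stdbool.h>
#include <stdlib.h>

#define POOL_SIZE (255 * 2 * 4096UL)  /* same made-up size as above */

/* Set once during initialization, read on every free thereafter. */
static char *pool_base;

static void pool_init(void)
{
        pool_base = malloc(POOL_SIZE);  /* stand-in for a boot-time allocation */
}

static inline bool is_pool_address(const void *addr)
{
        /* Extra cost vs. the static pool: pool_base is loaded from memory. */
        return pool_base &&
               (unsigned long)((const char *)addr - pool_base) < POOL_SIZE;
}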


