[PATCH v4 12/12] mm: SLUB hardened usercopy support

Rik van Riel riel at redhat.com
Mon Jul 25 17:22:00 PDT 2016


On Mon, 2016-07-25 at 16:29 -0700, Laura Abbott wrote:
> On 07/25/2016 02:42 PM, Rik van Riel wrote:
> > On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:
> > > On 07/20/2016 01:27 PM, Kees Cook wrote:
> > > > Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> > > > SLUB allocator to catch any copies that may span objects. Includes a
> > > > redzone handling fix discovered by Michael Ellerman.
> > > > 
> > > > Based on code from PaX and grsecurity.
> > > > 
> > > > Signed-off-by: Kees Cook <keescook at chromium.org>
> > > > Tested-by: Michael Ellerman <mpe at ellerman.id.au>
> > > > ---
> > > >  init/Kconfig |  1 +
> > > >  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
> > > >  2 files changed, 37 insertions(+)
> > > > 
> > > > diff --git a/init/Kconfig b/init/Kconfig
> > > > index 798c2020ee7c..1c4711819dfd 100644
> > > > --- a/init/Kconfig
> > > > +++ b/init/Kconfig
> > > > @@ -1765,6 +1765,7 @@ config SLAB
> > > > 
> > > >  config SLUB
> > > >  	bool "SLUB (Unqueued Allocator)"
> > > > +	select HAVE_HARDENED_USERCOPY_ALLOCATOR
> > > >  	help
> > > >  	   SLUB is a slab allocator that minimizes cache line usage
> > > >  	   instead of managing queues of cached objects (SLAB approach).
> > > > diff --git a/mm/slub.c b/mm/slub.c
> > > > index 825ff4505336..7dee3d9a5843 100644
> > > > --- a/mm/slub.c
> > > > +++ b/mm/slub.c
> > > > @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
> > > >  EXPORT_SYMBOL(__kmalloc_node);
> > > >  #endif
> > > > 
> > > > +#ifdef CONFIG_HARDENED_USERCOPY
> > > > +/*
> > > > + * Rejects objects that are incorrectly sized.
> > > > + *
> > > > + * Returns NULL if check passes, otherwise const char * to name of cache
> > > > + * to indicate an error.
> > > > + */
> > > > +const char *__check_heap_object(const void *ptr, unsigned long n,
> > > > +				struct page *page)
> > > > +{
> > > > +	struct kmem_cache *s;
> > > > +	unsigned long offset;
> > > > +	size_t object_size;
> > > > +
> > > > +	/* Find object and usable object size. */
> > > > +	s = page->slab_cache;
> > > > +	object_size = slab_ksize(s);
> > > > +
> > > > +	/* Find offset within object. */
> > > > +	offset = (ptr - page_address(page)) % s->size;
> > > > +
> > > > +	/* Adjust for redzone and reject if within the redzone. */
> > > > +	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
> > > > +		if (offset < s->red_left_pad)
> > > > +			return s->name;
> > > > +		offset -= s->red_left_pad;
> > > > +	}
> > > > +
> > > > +	/* Allow address range falling entirely within object size. */
> > > > +	if (offset <= object_size && n <= object_size - offset)
> > > > +		return NULL;
> > > > +
> > > > +	return s->name;
> > > > +}
> > > > +#endif /* CONFIG_HARDENED_USERCOPY */
> > > > +
> > > 
> > > I compared this against what check_valid_pointer does for SLUB_DEBUG
> > > checking. I was hoping we could utilize that function to avoid
> > > duplication, but a) __check_heap_object needs to allow accesses
> > > anywhere in the object, not just the beginning, and b) accessing
> > > page->objects is racy without the addition of locking in SLUB_DEBUG.
> > > 
> > > Still, the ptr < page_address(page) check from check_valid_pointer
> > > would be good to add to __check_heap_object, to avoid generating
> > > garbage large offsets and trying to infer C math.
> > > 
> > > diff --git a/mm/slub.c b/mm/slub.c
> > > index 7dee3d9..5370e4f 100644
> > > --- a/mm/slub.c
> > > +++ b/mm/slub.c
> > > @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
> > >          s = page->slab_cache;
> > >          object_size = slab_ksize(s);
> > > 
> > > +       if (ptr < page_address(page))
> > > +               return s->name;
> > > +
> > >          /* Find offset within object. */
> > >          offset = (ptr - page_address(page)) % s->size;
> > > 
> > 
> > I don't get it, isn't that already guaranteed because we
> > look for the page that ptr is in, before __check_heap_object
> > is called?
> > 
> > Specifically, in patch 3/12:
> > 
> > +       page = virt_to_head_page(ptr);
> > +
> > +       /* Check slab allocator for flags and size. */
> > +       if (PageSlab(page))
> > +               return __check_heap_object(ptr, n, page);
> > 
> > How can that generate a ptr that is not inside the page?
> > 
> > What am I overlooking?  And, should it be in the changelog or
> > a comment? :)
> > 
> 
> 
> I ran into the subtraction issue when the vmalloc detection wasn't
> working on ARM64: somehow virt_to_head_page returned a page that
> happened to have PageSlab set. I agree that if everything is working
> properly this check is redundant, but given the type of feature this
> is, a little bit of redundancy against a system running off into the
> weeds or bad patches might be warranted.
> 
That's fair.  I have no objection to the check, but would
like to see it documented, since it does look a little out
of place.
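
Something like this (an untested sketch on top of your diff, wording
entirely up for grabs) is what I have in mind:

	/* Find object and usable object size. */
	s = page->slab_cache;
	object_size = slab_ksize(s);

	/*
	 * Sanity check: callers are expected to pass the page returned
	 * by virt_to_head_page(ptr), so ptr should never sit below
	 * page_address(page).  Reject it anyway, so that a broken
	 * virt-to-page translation (like the ARM64 vmalloc detection
	 * bug above) or a buggy caller cannot feed a negative pointer
	 * difference into the unsigned offset math that follows.
	 */
	if (ptr < page_address(page))
		return s->name;

That would document both why the check looks redundant and why we want
it anyway.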

-- 

All Rights Reversed.