[PATCH v4 12/12] mm: SLUB hardened usercopy support
Laura Abbott
labbott at redhat.com
Mon Jul 25 17:54:25 PDT 2016
On 07/25/2016 01:45 PM, Kees Cook wrote:
> On Mon, Jul 25, 2016 at 12:16 PM, Laura Abbott <labbott at redhat.com> wrote:
>> On 07/20/2016 01:27 PM, Kees Cook wrote:
>>>
>>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>>> SLUB allocator to catch any copies that may span objects. Includes a
>>> redzone handling fix discovered by Michael Ellerman.
>>>
>>> Based on code from PaX and grsecurity.
>>>
>>> Signed-off-by: Kees Cook <keescook at chromium.org>
>>> Tested-by: Michael Ellerman <mpe at ellerman.id.au>
>>> ---
>>>  init/Kconfig |  1 +
>>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>>  2 files changed, 37 insertions(+)
>>>
>>> diff --git a/init/Kconfig b/init/Kconfig
>>> index 798c2020ee7c..1c4711819dfd 100644
>>> --- a/init/Kconfig
>>> +++ b/init/Kconfig
>>> @@ -1765,6 +1765,7 @@ config SLAB
>>>
>>>  config SLUB
>>>  	bool "SLUB (Unqueued Allocator)"
>>> +	select HAVE_HARDENED_USERCOPY_ALLOCATOR
>>>  	help
>>>  	  SLUB is a slab allocator that minimizes cache line usage
>>>  	  instead of managing queues of cached objects (SLAB approach).
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 825ff4505336..7dee3d9a5843 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
>>>  EXPORT_SYMBOL(__kmalloc_node);
>>>  #endif
>>>
>>> +#ifdef CONFIG_HARDENED_USERCOPY
>>> +/*
>>> + * Rejects objects that are incorrectly sized.
>>> + *
>>> + * Returns NULL if check passes, otherwise const char * to name of cache
>>> + * to indicate an error.
>>> + */
>>> +const char *__check_heap_object(const void *ptr, unsigned long n,
>>> +				struct page *page)
>>> +{
>>> +	struct kmem_cache *s;
>>> +	unsigned long offset;
>>> +	size_t object_size;
>>> +
>>> +	/* Find object and usable object size. */
>>> +	s = page->slab_cache;
>>> +	object_size = slab_ksize(s);
>>> +
>>> +	/* Find offset within object. */
>>> +	offset = (ptr - page_address(page)) % s->size;
>>> +
>>> +	/* Adjust for redzone and reject if within the redzone. */
>>> +	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
>>> +		if (offset < s->red_left_pad)
>>> +			return s->name;
>>> +		offset -= s->red_left_pad;
>>> +	}
>>> +
>>> +	/* Allow address range falling entirely within object size. */
>>> +	if (offset <= object_size && n <= object_size - offset)
>>> +		return NULL;
>>> +
>>> +	return s->name;
>>> +}
>>> +#endif /* CONFIG_HARDENED_USERCOPY */
>>> +
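To make the span check concrete, here is a small userspace model of the
test above. The stride and usable-size values are invented for
illustration (in the kernel they come from s->size and slab_ksize(s));
only the acceptance expression mirrors the patch.

#include <stdbool.h>
#include <stdio.h>

#define STRIDE      128UL  /* stands in for s->size (object-to-object stride) */
#define OBJECT_SIZE 100UL  /* stands in for slab_ksize(s) (usable bytes) */

/* Same acceptance test as __check_heap_object: the copy must fall
 * entirely within the usable part of a single object. */
static bool span_ok(unsigned long offset, unsigned long n)
{
	return offset <= OBJECT_SIZE && n <= OBJECT_SIZE - offset;
}

int main(void)
{
	/* 20 bytes at offset 80 fits: 20 <= 100 - 80. */
	printf("offset 80, n 20: %s\n", span_ok(80, 20) ? "ok" : "reject");

	/* 40 bytes at offset 80 would cross into the redzone/metadata or
	 * the next object: 40 > 100 - 80, so the cache name would be
	 * returned and the usercopy refused. */
	printf("offset 80, n 40: %s\n", span_ok(80, 40) ? "ok" : "reject");

	return 0;
}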
>>
>>
>> I compared this against what check_valid_pointer does for SLUB_DEBUG
>> checking. I was hoping we could reuse that function to avoid
>> duplication, but (a) __check_heap_object needs to allow accesses
>> anywhere in the object, not just at the beginning, and (b) accessing
>> page->objects is racy without the locking that SLUB_DEBUG adds.
>>
>> Still, the ptr < page_address(page) check from check_valid_pointer
>> would be good to add here, to avoid generating a garbage large offset
>> and relying on subtle C integer math to reject it.
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 7dee3d9..5370e4f 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
>>  	s = page->slab_cache;
>>  	object_size = slab_ksize(s);
>> +	if (ptr < page_address(page))
>> +		return s->name;
>> +
>>  	/* Find offset within object. */
>>  	offset = (ptr - page_address(page)) % s->size;
>>
>> With that, you can add
>>
>> Reviewed-by: Laura Abbott <labbott at redhat.com>
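
The "garbage large offsets" are worth spelling out, since the failure
mode is subtle. If ptr lies below page_address(page), the pointer
subtraction yields a negative ptrdiff_t; s->size is a plain signed int
in struct kmem_cache at this point, so the C99 remainder stays negative,
and storing it in the unsigned long offset wraps to an enormous value.
That value happens to fail the later offset <= object_size test, but
only by way of integer-conversion rules; the explicit comparison states
the intent directly. A userspace sketch of the wrap (the buffer and
values are made up):

#include <stdio.h>

int main(void)
{
	char slab[256];
	char *base = &slab[16];  /* stands in for page_address(page) */
	char *ptr  = &slab[0];   /* a bogus pointer below the slab base */
	long size  = 128;        /* stands in for the signed s->size */

	/* ptr - base is a signed ptrdiff_t, here -16. The C99 remainder
	 * keeps the dividend's sign, so the right-hand side is -16, and
	 * assigning it to unsigned long wraps to 0xfffffffffffffff0 on
	 * LP64. */
	unsigned long offset = (ptr - base) % size;

	printf("offset = %#lx\n", offset);  /* huge, not a real offset */
	return 0;
}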
>
> Cool, I'll add that.
>
> Should I add your reviewed-by for this patch only or for the whole series?
>
> Thanks!
>
> -Kees
>
Just this patch for now; I'm working through a couple of others.
>>
>>>  static size_t __ksize(const void *object)
>>>  {
>>>  	struct page *page;
>>>
>>
>> Thanks,
>> Laura
>
>
>