[PATCH] kasan: fix per-page tags for non-page_alloc pages

Marco Elver elver at google.com
Thu Mar 11 15:17:43 GMT 2021


On Thu, 11 Mar 2021 at 16:11, Andrey Konovalov <andreyknvl at google.com> wrote:
>
> To allow performing tag checks on page_alloc addresses obtained via
> page_address(), tag-based KASAN modes store tags for page_alloc
> allocations in page->flags.
>
> Currently, the default tag value stored in page->flags is 0x00.
> Therefore, page_address() returns a 0x00ffff... address for pages
> that were not allocated via page_alloc.
>
> This might cause problems. A particular case we encountered is a conflict
> with KFENCE. If a KFENCE-allocated slab object is freed via
> kfree(page_address(page) + offset), the address passed to kfree() gets
> tagged with 0x00 (as slab pages keep the default per-page tags). This
> makes the is_kfence_address() check fail, and the KFENCE object ends up
> on the normal slab freelist, which causes memory corruption.
>
> This patch changes the way KASAN stores tags in page flags: they are now
> stored xor'ed with 0xff. This way, KASAN doesn't need to initialize the
> per-page tag for every created page, which might be slow.
>
> With this change, page_address() returns natively-tagged (with 0xff)
> pointers for pages that didn't have tags set explicitly.
>
> This patch fixes the encountered conflict with KFENCE and prevents more
> similar issues that can occur in the future.
>
> Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
> Cc: stable at vger.kernel.org
> Signed-off-by: Andrey Konovalov <andreyknvl at google.com>

Reviewed-by: Marco Elver <elver at google.com>

Thank you!

> ---
>  include/linux/mm.h | 18 +++++++++++++++---
>  1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 77e64e3eac80..c45c28f094a7 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1440,16 +1440,28 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
>
>  #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
>
> +/*
> + * KASAN per-page tags are stored xor'ed with 0xff. This allows avoiding
> + * setting the tag for every page to the native kernel tag value 0xff, as
> + * the default value 0x00 then maps to 0xff.
> + */
> +
>  static inline u8 page_kasan_tag(const struct page *page)
>  {
> -       if (kasan_enabled())
> -               return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> -       return 0xff;
> +       u8 tag = 0xff;
> +
> +       if (kasan_enabled()) {
> +               tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> +               tag ^= 0xff;
> +       }
> +
> +       return tag;
>  }
>
>  static inline void page_kasan_tag_set(struct page *page, u8 tag)
>  {
>         if (kasan_enabled()) {
> +               tag ^= 0xff;
>                 page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
>                 page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
>         }
> --
> 2.31.0.rc2.261.g7f71774620-goog
>



More information about the linux-arm-kernel mailing list