[PATCH v3 1/3] kasan: use separate (un)poison implementation for integrated init
Marco Elver
elver at google.com
Wed May 26 03:12:22 PDT 2021
On Wed, May 12, 2021 at 01:09PM -0700, Peter Collingbourne wrote:
[...]
> +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
> +void kasan_free_pages(struct page *page, unsigned int order);
> +
> #else /* CONFIG_KASAN_HW_TAGS */
>
> static inline bool kasan_enabled(void)
> {
> +#ifdef CONFIG_KASAN
> return true;
> +#else
> + return false;
> +#endif
> }
Just:

	return IS_ENABLED(CONFIG_KASAN);
> static inline bool kasan_has_integrated_init(void)
> @@ -113,8 +113,30 @@ static inline bool kasan_has_integrated_init(void)
> return false;
> }
>
> +static __always_inline void kasan_alloc_pages(struct page *page,
> + unsigned int order, gfp_t flags)
> +{
> + /* Only available for integrated init. */
> + BUILD_BUG();
> +}
> +
> +static __always_inline void kasan_free_pages(struct page *page,
> + unsigned int order)
> +{
> + /* Only available for integrated init. */
> + BUILD_BUG();
> +}
This *should* always work, as long as the compiler optimizes everything
like we expect.
But: In this case, I think this is a sign that the interface design can be
improved. Can we just make kasan_{alloc,free}_pages() return a 'bool
__must_check' to indicate if kasan takes care of init?
The variants here would simply return kasan_has_integrated_init().
That way, there'd be no need for the BUILD_BUG()s and the interface
becomes harder to misuse by design.
Also, given that kasan_{alloc,free}_pages() initializes memory, this is
an opportunity to just give them a better name. Perhaps
/* Returns true if KASAN took care of initialization, false otherwise. */
bool __must_check kasan_alloc_pages_try_init(struct page *page, unsigned int order, gfp_t flags);
bool __must_check kasan_free_pages_try_init(struct page *page, unsigned int order);
[...]
> - init = want_init_on_free();
> - if (init && !kasan_has_integrated_init())
> - kernel_init_free_pages(page, 1 << order);
> - kasan_free_nondeferred_pages(page, order, init, fpi_flags);
> + if (kasan_has_integrated_init()) {
> + if (!skip_kasan_poison)
> + kasan_free_pages(page, order);
I think kasan_free_pages() could return a bool, and this would become
if (skip_kasan_poison || !kasan_free_pages(...)) {
...
> + } else {
> + bool init = want_init_on_free();
> +
> + if (init)
> + kernel_init_free_pages(page, 1 << order);
> + if (!skip_kasan_poison)
> + kasan_poison_pages(page, order, init);
> + }
>
> /*
> * arch_free_page() can make the page's contents inaccessible. s390
> @@ -2324,8 +2324,6 @@ static bool check_new_pages(struct page *page, unsigned int order)
> inline void post_alloc_hook(struct page *page, unsigned int order,
> gfp_t gfp_flags)
> {
> - bool init;
> -
> set_page_private(page, 0);
> set_page_refcounted(page);
>
> @@ -2344,10 +2342,16 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
> * kasan_alloc_pages and kernel_init_free_pages must be
> * kept together to avoid discrepancies in behavior.
> */
> - init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
> - kasan_alloc_pages(page, order, init);
> - if (init && !kasan_has_integrated_init())
> - kernel_init_free_pages(page, 1 << order);
> + if (kasan_has_integrated_init()) {
> + kasan_alloc_pages(page, order, gfp_flags);
It looks to me like kasan_alloc_pages() could return a bool, and this
would become
if (!kasan_alloc_pages(...)) {
...
> + } else {
> + bool init =
> + !want_init_on_free() && want_init_on_alloc(gfp_flags);
> +
[ No need for line-break (for cases like this the kernel is fine with up
to 100 cols if it improves readability). ]
> + kasan_unpoison_pages(page, order, init);
> + if (init)
> + kernel_init_free_pages(page, 1 << order);
> + }
Thoughts?
Thanks,
-- Marco