[PATCH v2] kasan: test: add memcpy test that avoids out-of-bounds write
Andrey Konovalov
andreyknvl at gmail.com
Fri Sep 10 14:17:38 PDT 2021
On Fri, Sep 10, 2021 at 11:14 PM Peter Collingbourne <pcc at google.com> wrote:
>
> With HW tag-based KASAN, error checks are performed implicitly by the
> load and store instructions in the memcpy implementation. A failed check
> results in tag checks being disabled and execution continuing. As a
> result, under HW tag-based KASAN, prior to commit 1b0668be62cf ("kasan:
> test: disable kmalloc_memmove_invalid_size for HW_TAGS"), this memcpy
> would end up corrupting memory until it hits an inaccessible page and
> causes a kernel panic.
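>
> Roughly, the pre-existing test body in question looks like the sketch
> below (illustration only; the exact code lives in lib/test_kasan.c). The
> negative size is converted to size_t, so under HW tags the copy keeps
> writing far past the 64-byte allocation instead of being reported:
>
>     char *ptr;
>     size_t size = 64;
>     volatile size_t invalid_size = -2;
>
>     ptr = kmalloc(size, GFP_KERNEL);
>     KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>
>     memset((char *)ptr, 0, 64);
>     /* (size_t)-2 is close to SIZE_MAX: generic and SW tag-based KASAN
>      * report the invalid size up front, but HW tags only fault on the
>      * individual accesses. */
>     KUNIT_EXPECT_KASAN_FAIL(test,
>             memmove((char *)ptr, (char *)ptr + 4, invalid_size));
>     kfree(ptr);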
>
> This is a pre-existing issue that was revealed by commit 285133040e6c
> ("arm64: Import latest memcpy()/memmove() implementation") which changed
> the memcpy implementation from using signed comparisons (incorrectly,
> resulting in the memcpy being terminated early for negative sizes)
> to using unsigned comparisons.
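>
> To make the comparison difference concrete, here is an illustrative toy
> copy loop in plain C (a sketch, not the actual arm64 memcpy): once a
> negative size has been converted to size_t, a signed comparison sees a
> negative value and bails out early, while an unsigned comparison sees a
> value close to SIZE_MAX and keeps copying:
>
>     #include <stddef.h>
>
>     static void *toy_copy(void *dst, const void *src, size_t n)
>     {
>             unsigned char *d = dst;
>             const unsigned char *s = src;
>
>             /* Old, incorrect behavior: treating n as signed makes
>              * (size_t)-2 look negative, so the copy stops before any
>              * out-of-bounds access happens. */
>             if ((long)n <= 0)
>                     return dst;
>
>             /* Correct unsigned behavior: (size_t)-2 is a huge count,
>              * so the loop marches far past both buffers. */
>             while (n--)
>                     *d++ = *s++;
>             return dst;
>     }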
>
> It is unclear how this could be handled by memcpy itself in a reasonable
> way. One possibility would be to add an exception handler that would force
> memcpy to return if a tag check fault is detected -- this would make the
> behavior roughly similar to generic and SW tag-based KASAN. However,
> this wouldn't solve the problem for asynchronous mode and would also make
> memcpy behavior inconsistent with manually copying data.
>
> This test was added as part of a series that taught KASAN to detect
> negative sizes in memory operations; see commit 8cceeff48f23 ("kasan:
> detect negative size in memory operation function"). Therefore we
> should keep testing for negative sizes with generic and SW tag-based
> KASAN. But there is some value in testing small memcpy overflows, so
> let's add another test with memcpy that does not destabilize the kernel
> by performing out-of-bounds writes, and run it in all modes.
>
> Link: https://linux-review.googlesource.com/id/I048d1e6a9aff766c4a53f989fb0c83de68923882
> Signed-off-by: Peter Collingbourne <pcc at google.com>
> ---
> lib/test_kasan.c | 18 +++++++++++++++++-
> 1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index 8835e0784578..aa8e42250219 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -493,7 +493,7 @@ static void kmalloc_oob_in_memset(struct kunit *test)
>  	kfree(ptr);
>  }
>
> -static void kmalloc_memmove_invalid_size(struct kunit *test)
> +static void kmalloc_memmove_negative_size(struct kunit *test)
>  {
>  	char *ptr;
>  	size_t size = 64;
> @@ -515,6 +515,21 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
>  	kfree(ptr);
>  }
>
> +static void kmalloc_memmove_invalid_size(struct kunit *test)
> +{
> +	char *ptr;
> +	size_t size = 64;
> +	volatile size_t invalid_size = size;
> +
> +	ptr = kmalloc(size, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> +
> +	memset((char *)ptr, 0, 64);
> +	KUNIT_EXPECT_KASAN_FAIL(test,
> +		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> +	kfree(ptr);
> +}
> +
>  static void kmalloc_uaf(struct kunit *test)
>  {
>  	char *ptr;
> @@ -1129,6 +1144,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
>  	KUNIT_CASE(kmalloc_oob_memset_4),
>  	KUNIT_CASE(kmalloc_oob_memset_8),
>  	KUNIT_CASE(kmalloc_oob_memset_16),
> +	KUNIT_CASE(kmalloc_memmove_negative_size),
>  	KUNIT_CASE(kmalloc_memmove_invalid_size),
>  	KUNIT_CASE(kmalloc_uaf),
>  	KUNIT_CASE(kmalloc_uaf_memset),
> --
> 2.33.0.309.g3052b89438-goog
>
Reviewed-by: Andrey Konovalov <andreyknvl at gmail.com>
Thanks!