[PATCH 06/11] change memory_is_poisoned_16 for aligned error

Dmitry Vyukov dvyukov at google.com
Thu Oct 12 00:16:39 PDT 2017


On Thu, Oct 12, 2017 at 1:23 AM, Andrew Morton
<akpm at linux-foundation.org> wrote:
> On Wed, 11 Oct 2017 16:22:22 +0800 Abbott Liu <liuwenliang at huawei.com> wrote:
>
>>  Because the ARM instruction set doesn't support unaligned memory
>>  accesses, memory_is_poisoned_16 must be changed for ARM.
>>
>> ...
>>
>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -149,6 +149,25 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
>>       return memory_is_poisoned_1(addr + size - 1);
>>  }
>>
>> +#ifdef CONFIG_ARM
>> +static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>> +{
>> +     u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>> +
>> +     if (unlikely(shadow_addr[0] || shadow_addr[1])) return true;
>
> Coding-style is messed up.  Please use scripts/checkpatch.pl.
>
>> +     else {
>> +             /*
>> +              * If two shadow bytes covers 16-byte access, we don't
>> +              * need to do anything more. Otherwise, test the last
>> +              * shadow byte.
>> +              */
>> +             if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>> +                     return false;
>> +             return memory_is_poisoned_1(addr + 15);
>> +     }
>> +}
>> +
>> +#else
>>  static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>  {
>>       u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>> @@ -159,6 +178,7 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>
>>       return *shadow_addr;
>>  }
>> +#endif
>
> - I don't understand why this is necessary.  memory_is_poisoned_16()
>   already handles unaligned addresses?
>
> - If it's needed on ARM then presumably it will be needed on other
>   architectures, so CONFIG_ARM is insufficiently general.
>
> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
>   it would be better to generalize/fix it in some fashion rather than
>   creating a new variant of the function.
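
The generic version does handle an unaligned access address, but it
reads the two covering shadow bytes with a single u16 load, and that
load is only naturally aligned when addr is 16-byte aligned. On an arm
configuration that enforces strict alignment, that shadow load itself
is presumably what traps. A quick userspace sketch of the arithmetic
(illustrative only -- the real kasan_mem_to_shadow() also adds a large,
aligned KASAN_SHADOW_OFFSET, omitted here):

#include <stdint.h>
#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 3	/* one shadow byte covers 8 bytes */

int main(void)
{
	/*
	 * A 16-byte access at addr covers the shadow bytes at addr >> 3
	 * and (addr >> 3) + 1.  The generic code reads both with a single
	 * u16 load, which is 2-byte aligned only when addr >> 3 is even,
	 * i.e. when addr itself is 16-byte aligned.
	 */
	for (uintptr_t addr = 0x1000; addr <= 0x1018; addr += 8) {
		uintptr_t shadow = addr >> KASAN_SHADOW_SCALE_SHIFT;

		printf("addr %#lx -> shadow byte %#lx (u16 load %saligned)\n",
		       (unsigned long)addr, (unsigned long)shadow,
		       (shadow & 1) ? "mis" : "");
	}
	return 0;
}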


Yes, I think it will be better to fix the current function rather than
keep two slightly different copies behind ifdefs.
Will something along these lines work for arm? 16-byte accesses are
not too common, so this should not be a performance problem. And
modern compilers can probably merge the two 1-byte checks into a single
2-byte check where that is safe (e.g. on x86).

static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
        u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);

        if (shadow_addr[0] || shadow_addr[1])
                return true;
        /* Unaligned 16-bytes access maps into 3 shadow bytes. */
        if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
                return memory_is_poisoned_1(addr + 15);
        return false;
}
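
Two single-byte shadow loads sidestep the unaligned u16 read entirely,
which is why this version should be safe on arm as well. For anyone who
wants to poke at the logic outside the kernel, here's a rough userspace
harness (fake shadow array, kasan_mem_to_shadow() reduced to an index
with the offset omitted, memory_is_poisoned_1() logic inlined --
illustrative only, not the kernel code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCALE 8				/* KASAN_SHADOW_SCALE_SIZE */

static int8_t shadow[16];		/* fake shadow for a 128-byte region */

static int8_t *mem_to_shadow(uintptr_t addr)
{
	return &shadow[addr / SCALE];	/* kernel also adds an offset */
}

/*
 * Same logic as memory_is_poisoned_1(): a positive shadow byte encodes
 * how many leading bytes of the 8-byte granule are accessible.
 */
static bool poisoned_1(uintptr_t addr)
{
	int8_t s = *mem_to_shadow(addr);

	return s && (int8_t)(addr & (SCALE - 1)) >= s;
}

static bool poisoned_16(uintptr_t addr)
{
	uint8_t *s = (uint8_t *)mem_to_shadow(addr);

	if (s[0] || s[1])
		return true;
	/* Unaligned 16-bytes access maps into 3 shadow bytes. */
	if (addr % SCALE)
		return poisoned_1(addr + 15);
	return false;
}

int main(void)
{
	shadow[2] = -1;			/* fully poison bytes 16..23 */

	printf("%d\n", poisoned_16(0));	/* 0: covers bytes 0..15, all clean */
	printf("%d\n", poisoned_16(4));	/* 1: covers bytes 4..19, caught by
					 *    the third shadow byte */
	return 0;
}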


