[v6 11/15] arm64/kasan: explicitly zero kasan shadow memory
Pasha Tatashin
pasha.tatashin at oracle.com
Tue Aug 8 04:49:22 PDT 2017
Hi Will,
Thank you for looking at this change. What you described is what I had in
earlier iterations of this project.
See for example here: https://lkml.org/lkml/2017/5/5/369
I was asked to remove that flag and instead zero memory in place only where
needed. Overall, the current approach is better everywhere else in the
kernel, but it does add a little extra code to kasan initialization.
Pasha
On 08/08/2017 05:07 AM, Will Deacon wrote:
> On Mon, Aug 07, 2017 at 04:38:45PM -0400, Pavel Tatashin wrote:
>> To optimize the performance of struct page initialization,
>> vmemmap_populate() will no longer zero memory.
>>
>> We must explicitly zero the memory that is allocated by vmemmap_populate()
>> for kasan, as this memory does not go through struct page initialization
>> path.
>>
>> Signed-off-by: Pavel Tatashin <pasha.tatashin at oracle.com>
>> Reviewed-by: Steven Sistare <steven.sistare at oracle.com>
>> Reviewed-by: Daniel Jordan <daniel.m.jordan at oracle.com>
>> Reviewed-by: Bob Picco <bob.picco at oracle.com>
>> ---
>> arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 42 insertions(+)
>>
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index 81f03959a4ab..e78a9ecbb687 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
>> set_pgd(pgd_offset_k(start), __pgd(0));
>> }
>>
>> +/*
>> + * Memory that was allocated by vmemmap_populate is not zeroed, so we must
>> + * zero it here explicitly.
>> + */
>> +static void
>> +zero_vmemmap_populated_memory(void)
>> +{
>> + struct memblock_region *reg;
>> + u64 start, end;
>> +
>> + for_each_memblock(memory, reg) {
>> + start = __phys_to_virt(reg->base);
>> + end = __phys_to_virt(reg->base + reg->size);
>> +
>> + if (start >= end)
>> + break;
>> +
>> + start = (u64)kasan_mem_to_shadow((void *)start);
>> + end = (u64)kasan_mem_to_shadow((void *)end);
>> +
>> + /* Round to the start and end of the mapped pages */
>> + start = round_down(start, SWAPPER_BLOCK_SIZE);
>> + end = round_up(end, SWAPPER_BLOCK_SIZE);
>> + memset((void *)start, 0, end - start);
>> + }
>> +
>> + start = (u64)kasan_mem_to_shadow(_text);
>> + end = (u64)kasan_mem_to_shadow(_end);
>> +
>> + /* Round to the start and end of the mapped pages */
>> + start = round_down(start, SWAPPER_BLOCK_SIZE);
>> + end = round_up(end, SWAPPER_BLOCK_SIZE);
>> + memset((void *)start, 0, end - start);
>> +}
>
> I can't help but think this would be an awful lot nicer if you made
> vmemmap_alloc_block take extra GFP flags as a parameter. That way, we could
> implement a version of vmemmap_populate that does the zeroing when we need
> it, without having to duplicate a bunch of the code like this. I think it
> would also be less error-prone, because you wouldn't have to do the
> allocation and the zeroing in two separate steps.
>
> Will
>