[PATCH] mm, kasan: don't poison boot memory

David Hildenbrand david at redhat.com
Mon Feb 22 04:52:23 EST 2021


On 20.02.21 00:04, George Kennedy wrote:
> 
> 
> On 2/19/2021 11:45 AM, George Kennedy wrote:
>>
>>
>> On 2/18/2021 7:09 PM, Andrey Konovalov wrote:
>>> On Fri, Feb 19, 2021 at 1:06 AM George Kennedy
>>> <george.kennedy at oracle.com> wrote:
>>>>
>>>>
>>>> On 2/18/2021 3:55 AM, David Hildenbrand wrote:
>>>>> On 17.02.21 21:56, Andrey Konovalov wrote:
>>>>>> During boot, all non-reserved memblock memory is exposed to the buddy
>>>>>> allocator. Poisoning all that memory with KASAN lengthens boot time,
>>>>>> especially on systems with a large amount of RAM. This patch makes
>>>>>> page_alloc not call kasan_free_pages() on all new memory.
>>>>>>
>>>>>> __free_pages_core() is used when exposing fresh memory during system
>>>>>> boot and when onlining memory during hotplug. This patch adds a new
>>>>>> FPI_SKIP_KASAN_POISON flag and passes it to __free_pages_ok() through
>>>>>> free_pages_prepare() from __free_pages_core().
>>>>>>
>>>>>> This has little impact on KASAN memory tracking.
>>>>>>
>>>>>> Assuming that there are no references to newly exposed pages before
>>>>>> they are ever allocated, there won't be any intended (but buggy)
>>>>>> accesses to that memory that KASAN would normally detect.
>>>>>>
>>>>>> However, with this patch, KASAN stops detecting wild and large
>>>>>> out-of-bounds accesses that happen to land on a fresh memory page
>>>>>> that was never allocated. This is taken as an acceptable trade-off.
>>>>>>
>>>>>> All memory allocated normally when the boot is over keeps getting
>>>>>> poisoned as usual.
>>>>>>
>>>>>> Signed-off-by: Andrey Konovalov <andreyknvl at google.com>
>>>>>> Change-Id: Iae6b1e4bb8216955ffc14af255a7eaaa6f35324d
>>>>> Not sure this is the right thing to do, see
>>>>>
>>>>> https://lkml.kernel.org/r/bcf8925d-0949-3fe1-baa8-cc536c529860@oracle.com
>>>>>
>>>>>
>>>>> Reversing the order in which memory gets allocated + used during boot
>>>>> (in a patch by me) might have revealed an invalid memory access during
>>>>> boot.
>>>>>
>>>>> I suspect that this issue would no longer be detected with your
>>>>> patch, as the invalid memory access would simply go unnoticed.
>>>>> Now, I cannot prove that :)
>>>> Since David's patch we're having trouble with the iBFT ACPI table,
>>>> which is mapped in via kmap() - see acpi_map() in "drivers/acpi/osl.c".
>>>> KASAN detects that it is being used after free when ibft_init()
>>>> accesses the iBFT table, but as of yet we can't find where it gets
>>>> freed (we've instrumented calls to kunmap()).
>>> Maybe it doesn't get freed, but what you see is a wild or a large
>>> out-of-bounds access. Since KASAN marks all memory as freed during the
>>> memblock->page_alloc transition, such bugs can manifest as
>>> use-after-frees.
>>
>> It gets freed and re-used. By the time the iBFT table is accessed by
>> ibft_init(), the page has been overwritten.
>>
>> Setting page flags like the following before the call to kmap()
>> prevents the iBFT table page from being freed:
> 
> Cleaned up version:
> 
> diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
> index 0418feb..8f0a8e7 100644
> --- a/drivers/acpi/osl.c
> +++ b/drivers/acpi/osl.c
> @@ -287,9 +287,12 @@ static void __iomem *acpi_map(acpi_physical_address pg_off, unsigned long pg_sz)
> 
>        pfn = pg_off >> PAGE_SHIFT;
>        if (should_use_kmap(pfn)) {
> +        struct page *page = pfn_to_page(pfn);
> +
>            if (pg_sz > PAGE_SIZE)
>                return NULL;
> -        return (void __iomem __force *)kmap(pfn_to_page(pfn));
> +        SetPageReserved(page);
> +        return (void __iomem __force *)kmap(page);
>        } else
>            return acpi_os_ioremap(pg_off, pg_sz);
>    }
> @@ -299,9 +302,12 @@ static void acpi_unmap(acpi_physical_address pg_off, void __iomem *vaddr)
>        unsigned long pfn;
> 
>        pfn = pg_off >> PAGE_SHIFT;
> -    if (should_use_kmap(pfn))
> -        kunmap(pfn_to_page(pfn));
> -    else
> +    if (should_use_kmap(pfn)) {
> +        struct page *page = pfn_to_page(pfn);
> +
> +        ClearPageReserved(page);
> +        kunmap(page);
> +    } else
>            iounmap(vaddr);
>    }
> 
> David, the above works, but we're wondering why it is now necessary.
> kunmap() is never hit. In what other ways could a page mapped via
> kmap() be unmapped?
> 

Let me look into the code ... I have little experience with ACPI 
details, so bear with me.

I assume that acpi_map()/acpi_unmap() map some firmware blob that is 
provided to us via the firmware/BIOS/...

should_use_kmap() tells us whether
a) we have a "struct page" and should kmap() that one
b) we don't have a "struct page" and should ioremap.
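
For reference, that check is roughly along these lines - just a sketch 
assuming a CONFIG_HIGHMEM kernel, the real code in drivers/acpi/osl.c 
may differ in details:

static inline bool should_use_kmap(int pfn)
{
	struct page *page = pfn_to_page(pfn);

	/* No memmap entry for this pfn -> fall back to ioremap (case b). */
	if (!page)
		return false;
	/* Highmem pages have a "struct page" but no permanent kernel
	 * mapping, so we have to kmap() them (case a). */
	return PageHighMem(page);
}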

As it is a blob, the firmware should always reserve that memory region 
via memblock (e.g., memblock_reserve()), such that we either
1) don't create a memmap ("struct page") at all (-> case b) ), or
2) if we have to create a memmap, mark the page PG_reserved and
    *never* expose it to the buddy (-> case a) ).
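
In other words, early during boot something like the following should 
run for the blob. This is only a sketch: blob_phys/blob_size are 
hypothetical stand-ins for whatever address/size the firmware advertises 
(e.g., for the iBFT).

static void __init reserve_firmware_blob(phys_addr_t blob_phys,
					 phys_addr_t blob_size)
{
	/* Keep the blob out of the free memory pool. Any memmap created
	 * for this range later is marked PG_reserved instead of being
	 * handed to the buddy. */
	memblock_reserve(blob_phys, blob_size);
}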


Are you telling me that in this case we might have a memmap for the HW 
blob that is *not* PG_reserved? In that case it most probably got 
exposed to the buddy, where it can happily get allocated/freed.

The latent BUG would be that the blob gets exposed to the system like 
ordinary RAM instead of being reserved via memblock early during boot. 
Assuming the blob has a low physical address, with my patch it will get 
allocated/used a lot earlier - which means we now trigger this latent 
BUG more easily.
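
If that guess is right, a sanity check along these lines in acpi_map(), 
right before the kmap(), should make the latent BUG scream immediately 
(illustrative only, not a proposed fix; check_blob_memmap() is a made-up 
helper):

static void check_blob_memmap(unsigned long pfn)
{
	struct page *page = pfn_to_page(pfn);

	/* A firmware blob's memmap entry is expected to be PG_reserved;
	 * if it is not, the region was never reserved via memblock and
	 * may be owned by the buddy. */
	WARN_ON_ONCE(!PageReserved(page));
}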

My patch discovered similar latent BUGs on ARM boards, where special RAM 
regions were not properly marked as reserved via the device tree.

Now, this is just a wild guess :) Can you dump the page when mapping 
(before the SetPageReserved()) and when unmapping, to see what the state 
of that memmap is?
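
Something like this in both paths should do - dump_page() prints the 
page flags, refcount, mapcount, etc. to the kernel log:

	/* In acpi_map(), before the new SetPageReserved(): */
	dump_page(pfn_to_page(pfn), "acpi_map");

	/* ... and in acpi_unmap(), before the ClearPageReserved(): */
	dump_page(pfn_to_page(pfn), "acpi_unmap");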

-- 
Thanks,

David / dhildenb



