[PATCH] arm64/mm: add fallback option to allocate virtually contiguous memory

Anshuman Khandual anshuman.khandual at arm.com
Thu Sep 10 06:58:43 EDT 2020



On 09/10/2020 01:38 PM, David Hildenbrand wrote:
> On 10.09.20 08:45, Anshuman Khandual wrote:
>> Hello Sudarshan,
>>
>> On 09/10/2020 11:35 AM, Sudarshan Rajagopalan wrote:
>>> When section mappings are enabled, we allocate vmemmap pages from
>>> physically contiguous memory of size PMD_SIZE using
>>> vmemmap_alloc_block_buf(). Section mappings are good to reduce TLB
>>> pressure. But when the system is highly fragmented and memory blocks
>>> are being hot-added at runtime, it's possible that such physically
>>> contiguous memory allocations can fail. Rather than failing the
>>
>> Did you really see this happen on a system?
>>
>>> memory hot-add procedure, add a fallback option to allocate vmemmap pages from
>>> discontiguous pages using vmemmap_populate_basepages().
>>
>> Which could lead to a mixed page size mapping in the VMEMMAP area.
> 
> Right, which gives you a slight performance hit - nobody really cares,
> especially if it happens in corner cases only.

On the performance impact, I will probably let Catalin and others comment
from the arm64 platform perspective, because I might not have all the
information here. But I will do some more auditing of the possible impact
of a mixed page size vmemmap mapping.

> 
> At least x86_64 (see vmemmap_populate_hugepages()) and s390x (added
> recently by me) implement that behavior.
> 
> Assume you run in a virtualized environment where your hypervisor tries
> to do some smart dynamic guest resizing - like monitoring the guest
> memory consumption and adding more memory on demand. You'd much rather
> want hot-add to succeed (in these corner cases) than fail just because
> you weren't able to grab a huge page in one instance.
> 
> Examples include XEN balloon, Hyper-V balloon, and virtio-mem. We might
> see some of these for arm64 as well (if we don't already).

Makes sense.
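
For reference, the shape of the fallback being discussed would be roughly
the following in the arm64 vmemmap_populate() - just a sketch assuming the
current altmap-aware vmemmap_alloc_block_buf() and
vmemmap_populate_basepages() signatures, not the final patch:

int __meminit vmemmap_populate(unsigned long start, unsigned long end,
			       int node, struct vmem_altmap *altmap)
{
	unsigned long addr = start;
	unsigned long next;
	pgd_t *pgdp;
	p4d_t *p4dp;
	pud_t *pudp;
	pmd_t *pmdp;

	do {
		next = pmd_addr_end(addr, end);

		pgdp = vmemmap_pgd_populate(addr, node);
		if (!pgdp)
			return -ENOMEM;

		p4dp = vmemmap_p4d_populate(pgdp, addr, node);
		if (!p4dp)
			return -ENOMEM;

		pudp = vmemmap_pud_populate(p4dp, addr, node);
		if (!pudp)
			return -ENOMEM;

		pmdp = pmd_offset(pudp, addr);
		if (pmd_none(READ_ONCE(*pmdp))) {
			void *p;

			/* Prefer one physically contiguous PMD_SIZE block */
			p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
			if (p) {
				pmd_set_huge(pmdp, __pa(p),
					     __pgprot(PROT_SECT_NORMAL));
			} else if (vmemmap_populate_basepages(addr, next,
							node, altmap)) {
				/* Base page fallback failed as well */
				return -ENOMEM;
			}
		} else {
			vmemmap_verify((pte_t *)pmdp, node, addr, next);
		}
	} while (addr = next, addr != end);

	return 0;
}

The important part is the per-PMD granularity: only the ranges where the
PMD_SIZE allocation fails get mapped with base pages, while the rest of
the vmemmap keeps its section mappings.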

> 
>> Allocation failure in vmemmap_populate() should just cleanly fail
>> the memory hot-add operation, which can then be retried. Why does the
>> retry have to be offloaded to the kernel?
> 
> (not sure what "offloaded to kernel" really means here - add_memory() is

Offloaded here referred to the responsibility to retry or fall back. The
question was whether the situation could be resolved by the user retrying
the hot-add operation until it succeeds, rather than by the kernel falling
back to allocating base pages.

> also just triggered from the kernel) I disagree; we should try our best
> to add memory and make it available, especially when short on memory
> already.

Okay.


