[PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
David Hildenbrand
david at redhat.com
Tue Feb 2 09:42:26 EST 2021
On 29.01.21 09:51, Michal Hocko wrote:
> On Fri 29-01-21 09:21:28, Mike Rapoport wrote:
>> On Thu, Jan 28, 2021 at 02:01:06PM +0100, Michal Hocko wrote:
>>> On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
>>>
>>>> And hugetlb pools may also be depleted by anybody calling
>>>> mmap(MAP_HUGETLB), and there is no limiting knob for this, while
>>>> secretmem has RLIMIT_MEMLOCK.
>>>
>>> Yes, it can fail. But it would fail at mmap time, when the reservation
>>> fails, not at #PF time, which can be at any time.
>>
>> It may fail at #PF time as well:
>>
>> hugetlb_fault()
>>   hugetlb_no_page()
>>     ...
>>       alloc_huge_page()
>>         alloc_gigantic_page()
>>           cma_alloc()
>>             -ENOMEM;
>
> I would have to double check. From what I remember, the CMA allocator is an
> optimization to increase the chances of allocating hugetlb pages when
> overcommitting, because pages should normally be pre-allocated in the pool
> and reserved at mmap time. But even if a hugetlb page is not pre-allocated,
> this will get propagated as SIGBUS, unless that has changed.
It's an optimization to allocate gigantic pages dynamically later (i.e.,
without using memblock during boot). Not just for overcommit, but for any
kind of gigantic page allocation.
The actual allocation from CMA should happen when setting nr_hugepages:
nr_hugepages_store_common()->set_max_huge_pages()->alloc_pool_huge_page()...->alloc_gigantic_page()
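As a concrete illustration (not part of the patch series), here is a minimal
user-space sketch that pokes that sysfs knob. It assumes x86-64 style 1 GiB
gigantic pages (hence the hugepages-1048576kB directory) and a CMA area
reserved with hugetlb_cma= on the kernel command line; adjust as needed:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Sketch: grow the gigantic page pool at runtime.  Writing to nr_hugepages
 * goes through nr_hugepages_store_common()->set_max_huge_pages(), which for
 * gigantic pages ends up allocating from CMA.  Must be run as root.
 */
int main(void)
{
	const char *knob =
		"/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages";
	int fd = open(knob, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Ask for two 1 GiB pages; fewer may show up if CMA is fragmented. */
	if (write(fd, "2", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}

Reading the knob back (or HugePages_Total in /proc/meminfo) then shows how
many gigantic pages the kernel actually managed to allocate.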
The path described above seems to be trying to overcommit gigantic
pages, something that can be expected to SIGBUS. Reservations are
handled via the pre-allocated pool.
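To illustrate the mmap-time vs. #PF-time distinction, here is another minimal
user-space sketch (again not from the series). MAP_NORESERVE deliberately
skips the mmap-time reservation, so a depleted pool only shows up as SIGBUS
when the page is first touched:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT	26
#endif
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << MAP_HUGE_SHIFT)
#endif

#define LEN	(1UL << 30)	/* one gigantic (1 GiB) page */

int main(void)
{
	/*
	 * Without MAP_NORESERVE the reservation is taken at mmap() time, so
	 * a depleted pool makes mmap() itself fail.  With MAP_NORESERVE the
	 * mapping is created optimistically and a missing page is only
	 * discovered at fault time, reported as SIGBUS.
	 */
	void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
		       MAP_HUGE_1GB | MAP_NORESERVE, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* First touch goes through the hugetlb fault path quoted above. */
	memset(p, 0, LEN);
	puts("faulted in one 1 GiB page");
	munmap(p, LEN);
	return 0;
}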
--
Thanks,
David / dhildenb