[PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs

Anshuman Khandual anshuman.khandual at arm.com
Tue Jun 23 20:17:30 EDT 2020



On 06/23/2020 10:10 PM, Robin Murphy wrote:
> On 2020-06-23 13:48, Anshuman Khandual wrote:
>>
>> On 06/23/2020 02:54 PM, kernel test robot wrote:
>>>     423        /*
>>>     424         * must be done after arm64_numa_init() which calls numa_init() to
>>>     425         * initialize node_online_map that gets used in hugetlb_cma_reserve()
>>>     426         * while allocating required CMA size across online nodes.
>>>     427         */
>>>   > 428        arm64_hugetlb_cma_reserve();
>>
>> Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.
> 
> ...although it might be nicer to include asm/hugetlb.h directly so that you can pick up the stub definition reliably.

Including <asm/hugetlb.h> directly does not solve the problem, and
<linux/hugetlb.h> is no better. arm64_hugetlb_cma_reserve() needs
protection with respect to both CONFIG_CMA and CONFIG_HUGETLB_PAGE. I
had dropped the HUGETLB_PAGE guard assuming it would already be taken
care of, since the stub itself lives in <asm/hugetlb.h>, but that
turned out not to be true.
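
For illustration, a minimal sketch of the guards in question. It
assumes the stub in <asm/hugetlb.h> is selected on CONFIG_CMA, and
that arch/arm64/mm/hugetlbpage.c, which provides the real definition,
is only built when CONFIG_HUGETLB_PAGE is enabled; the exact shape in
the final patch may differ:

/* arch/arm64/include/asm/hugetlb.h -- sketch */
#ifdef CONFIG_CMA
void arm64_hugetlb_cma_reserve(void);
#else
static inline void arm64_hugetlb_cma_reserve(void)
{
}
#endif /* CONFIG_CMA */

/*
 * arch/arm64/mm/init.c -- sketch. With CONFIG_CMA=y but
 * CONFIG_HUGETLB_PAGE=n the header above declares the real function
 * while hugetlbpage.c, which defines it, is never compiled, so the
 * call site has to check both options itself:
 */
#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
	arm64_hugetlb_cma_reserve();
#endif

Guarding only on the header side would fix the missing declaration
but not the missing definition, which is why both config options end
up in the call-site check.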


