[PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
Robin Murphy
robin.murphy at arm.com
Wed Jun 24 05:45:52 EDT 2020
On 2020-06-24 01:17, Anshuman Khandual wrote:
>
>
> On 06/23/2020 10:10 PM, Robin Murphy wrote:
>> On 2020-06-23 13:48, Anshuman Khandual wrote:
>>>
>>> On 06/23/2020 02:54 PM, kernel test robot wrote:
>>>>   423	/*
>>>>   424	 * must be done after arm64_numa_init() which calls numa_init() to
>>>>   425	 * initialize node_online_map that gets used in hugetlb_cma_reserve()
>>>>   426	 * while allocating required CMA size across online nodes.
>>>>   427	 */
>>>> > 428	arm64_hugetlb_cma_reserve();
>>>
>>> Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.
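[A minimal sketch of that wrapping, assuming the call sits in
bootmem_init() in arch/arm64/mm/init.c as the robot report above
suggests:

#ifdef CONFIG_HUGETLB_PAGE
	/*
	 * must be done after arm64_numa_init() which calls numa_init() to
	 * initialize node_online_map that gets used in hugetlb_cma_reserve()
	 * while allocating required CMA size across online nodes.
	 */
	arm64_hugetlb_cma_reserve();
#endif /* CONFIG_HUGETLB_PAGE */
]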
>>
>> ...although it might be nicer to include asm/hugetlb.h directly so that you can pick up the stub definition reliably.
>
> Including <asm/hugetlb.h> directly does not solve the problem, and
> <linux/hugetlb.h> is no better. arm64_hugetlb_cma_reserve() needs
> protection wrt both CONFIG_CMA and CONFIG_HUGETLB_PAGE. I dropped the
> HUGETLB_PAGE guard assuming it would be taken care of, since the stub
> itself was in <asm/hugetlb.h> - which turns out not to be the case.
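[A sketch of a header arrangement covering both dependencies -
hypothetical, but along the lines being discussed for
arch/arm64/include/asm/hugetlb.h:

#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
void arm64_hugetlb_cma_reserve(void);
#else
static inline void arm64_hugetlb_cma_reserve(void)
{
}
#endif

The real declaration is only visible when both options are enabled;
otherwise every caller picks up the empty inline stub.]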
Sure, I wasn't suggesting that the implementation of the header itself
wouldn't need tweaking - the point I was trying to get at is that it's
preferable to have *either* a stub definition in an always-reachable
header, *or* inline #ifdefs around the caller. Mixing both such that
there are 3 or 4 possible combinations just isn't nice to maintain.
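[To illustrate, with an always-reachable stub along the lines above, the
call site needs no guard at all - a sketch, not the actual patch:

void __init bootmem_init(void)
{
	arm64_numa_init();

	/*
	 * Safe in all configs: the header provides either the real
	 * declaration or an empty inline stub.
	 */
	arm64_hugetlb_cma_reserve();

	/* ... */
}

whereas mixing the two approaches means the stub, the real definition
and every caller each have to agree on which config symbols they check.]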
Robin.