[PATCH] mm: Add ARCH_FORCE_PAGE_BLOCK_ORDER to select page block order

Zi Yan ziy at nvidia.com
Thu May 1 11:40:26 PDT 2025


On 1 May 2025, at 14:21, Kalesh Singh wrote:

> On Thu, May 1, 2025 at 10:11 AM Juan Yescas <jyescas at google.com> wrote:
>>
>> On Thu, May 1, 2025 at 7:24 AM Zi Yan <ziy at nvidia.com> wrote:
>>>
>>> On 1 May 2025, at 1:25, Juan Yescas wrote:
>>>
>>>> Problem: On large page size configurations (16KiB, 64KiB), the CMA
>>>> alignment requirement (CMA_MIN_ALIGNMENT_BYTES) increases considerably,
>>>> and this causes the CMA reservations to be larger than necessary.
>>>> This means that the system will have fewer available MIGRATE_UNMOVABLE and
>>>> MIGRATE_RECLAIMABLE page blocks, since MIGRATE_CMA can't fall back to them.
>>>>
>>>> The CMA_MIN_ALIGNMENT_BYTES increases because it depends on
>>>> MAX_PAGE_ORDER, which in turn depends on ARCH_FORCE_MAX_ORDER. The value
>>>> of ARCH_FORCE_MAX_ORDER increases on 16k and 64k kernels.
>>>>
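>>>> The chain, roughly (a simplified C sketch of the arithmetic, not the
>>>> kernel macros themselves; the helper name is made up for illustration):
>>>>
>>>>     /* CMA_MIN_ALIGNMENT_BYTES is PAGE_SIZE * pageblock_nr_pages, and
>>>>      * pageblock_nr_pages is 1 << pageblock_order, with pageblock_order
>>>>      * capped by MAX_PAGE_ORDER (i.e. ARCH_FORCE_MAX_ORDER). */
>>>>     static unsigned long cma_min_alignment_bytes(unsigned long page_size,
>>>>                                                  unsigned int pageblock_order)
>>>>     {
>>>>             return page_size * (1UL << pageblock_order);
>>>>     }
>>>>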
>>>> For example, the CMA alignment requirement when:
>>>>
>>>> - CONFIG_ARCH_FORCE_MAX_ORDER default value is used
>>>> - CONFIG_TRANSPARENT_HUGEPAGE is set:
>>>>
>>>> PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order | CMA_MIN_ALIGNMENT_BYTES
>>>> -----------------------------------------------------------------------
>>>>    4KiB   |      10        |      10         |  4KiB * (2 ^ 10)  =  4MiB
>>>>   16KiB   |      11        |      11         | 16KiB * (2 ^ 11) =  32MiB
>>>>   64KiB   |      13        |      13         | 64KiB * (2 ^ 13) = 512MiB
>>>>
>>>> There are some extreme cases for the CMA alignment requirement when:
>>>>
>>>> - CONFIG_ARCH_FORCE_MAX_ORDER maximum value is set
>>>> - CONFIG_TRANSPARENT_HUGEPAGE is NOT set
>>>> - CONFIG_HUGETLB_PAGE is NOT set:
>>>>
>>>> PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order |  CMA_MIN_ALIGNMENT_BYTES
>>>> ------------------------------------------------------------------------
>>>>    4KiB   |      15        |      15         |  4KiB * (2 ^ 15) = 128MiB
>>>>   16KiB   |      13        |      13         | 16KiB * (2 ^ 13) = 128MiB
>>>>   64KiB   |      13        |      13         | 64KiB * (2 ^ 13) = 512MiB
>>>>
>>>> This affects the CMA reservations for the drivers. If a driver needs
>>>> 4MiB of CMA memory on a 4KiB kernel, then on a 16KiB kernel the minimum
>>>> reservation has to be 32MiB due to the alignment requirements (see the
>>>> rounding sketch after the examples below):
>>>>
>>>> reserved-memory {
>>>>     ...
>>>>     cma_test_reserve: cma_test_reserve {
>>>>         compatible = "shared-dma-pool";
>>>>         size = <0x0 0x400000>; /* 4 MiB */
>>>>         ...
>>>>     };
>>>> };
>>>>
>>>> reserved-memory {
>>>>     ...
>>>>     cma_test_reserve: cma_test_reserve {
>>>>         compatible = "shared-dma-pool";
>>>>         size = <0x0 0x2000000>; /* 32 MiB */
>>>>         ...
>>>>     };
>>>> };
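>>>>
>>>> (Illustrative sketch only: the requested size has to be rounded up to
>>>> CMA_MIN_ALIGNMENT_BYTES; the helper below is hypothetical and just shows
>>>> the rounding that forces 4 MiB up to 32 MiB on a 16KiB kernel.)
>>>>
>>>>     /* Hypothetical helper: round a requested CMA size up to the minimum
>>>>      * CMA alignment (a power of two). */
>>>>     static unsigned long cma_round_size(unsigned long size,
>>>>                                         unsigned long align)
>>>>     {
>>>>             return (size + align - 1) & ~(align - 1);
>>>>     }
>>>>
>>>>     /* 16KiB pages, pageblock_order = 11 -> align = 32 MiB:
>>>>      * cma_round_size(4 MiB, 32 MiB) == 32 MiB */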
>>>>
>>>> Solution: Add a new config, ARCH_FORCE_PAGE_BLOCK_ORDER, that
>>>> allows the page block order to be set. The maximum page block
>>>> order is given by ARCH_FORCE_MAX_ORDER.
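>>>>
>>>> For example (illustrative arithmetic only), on a 16KiB kernel a forced
>>>> page block order of 10 would give:
>>>>
>>>>     CMA_MIN_ALIGNMENT_BYTES = 16KiB * (2 ^ 10) = 16MiB
>>>>
>>>> and a forced order of 8 would restore the 4KiB-kernel alignment:
>>>>
>>>>     CMA_MIN_ALIGNMENT_BYTES = 16KiB * (2 ^ 8) = 4MiB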
>>>
>>> Why not use a boot time parameter to change page block order?
>>
>> That is a good option. The main tradeoff is:
>>
>> - The bootloader would have to be updated on the devices to pass the right
>> pageblock_order value depending on the kernel page size. Currently,
>> we can boot 4k/16k kernels without any changes to the bootloader.
>
> Once we change the page block order, we likely need to update the CMA
> reservations in the device tree to match the new minimum alignment, and the
> device tree then needs to be recompiled and flashed to the device. So there
> is likely no significant process saving from making the page block order a
> boot parameter.

Got it. Thank you for the explanation.

--
Best Regards,
Yan, Zi