[RFC PATCH] mm/page_alloc: Add PCP list for THP CMA

Juan Yescas jyescas at google.com
Mon Aug 4 18:24:51 PDT 2025


On Mon, Aug 4, 2025 at 12:00 PM Zi Yan <ziy at nvidia.com> wrote:
>
> On 4 Aug 2025, at 14:49, David Hildenbrand wrote:
>
> > On 04.08.25 20:20, Juan Yescas wrote:
> >> Hi David/Zi,
> >>
> >> Is there any reason why the MIGRATE_CMA pages are not in the PCP lists?
> >>
> >> There are many devices that need fast allocation of MIGRATE_CMA pages,
> >> and they have to get them from the buddy allocator, which is slower
> >> than allocating from the PCP lists.
> >>
> >> We also have cases where the MIGRATE_CMA memory requirements are large.
> >> For example, GPUs need MIGRATE_CMA memory in the range of 30 MiB to 500 MiB.
> >> These cases would benefit if we had THPs for CMA.
> >>
> >> Could we add the support for MIGRATE_CMA pages on the PCP and THP lists?
> >
> > Remember how CMA memory is used:
> >
> > The owner allocates it through cma_alloc() and friends, where the CMA allocator will try allocating *specific physical memory regions* using alloc_contig_range(). It doesn't just go ahead and pick a random CMA page from the buddy (or PCP) lists. That wouldn't work (just imagine having multiple CMA areas, etc.).
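
(For anyone following along: the owner-side path David describes is cma_alloc()/cma_release(). A minimal sketch of what an owner of a CMA region does is below; the function and variable names are hypothetical, error handling is trimmed, and it is an illustration rather than code from any driver.)

```c
/* Sketch of owner-side CMA usage; names hypothetical, error handling trimmed. */
#include <linux/cma.h>

static struct page *grab_gpu_buffer(struct cma *gpu_cma, unsigned long nr_pages)
{
	/*
	 * cma_alloc() carves nr_pages contiguous pages out of this specific
	 * CMA area via alloc_contig_range(); it never just pops pages off
	 * the buddy/PCP free lists.
	 */
	return cma_alloc(gpu_cma, nr_pages, 0 /* align order */, false /* no_warn */);
}

static void release_gpu_buffer(struct cma *gpu_cma, struct page *pages,
			       unsigned long nr_pages)
{
	cma_release(gpu_cma, pages, nr_pages);
}
```
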
>
> Yeah, unless some code is relying on gfp_to_alloc_flags_cma() setting
> ALLOC_CMA to pull CMA pages from the buddy allocator.
>
> >
> > Anybody else is free to use CMA pages for MOVABLE allocations. So we treat them as being MOVABLE on the PCP.
> >
> > Having a separate CMA PCP list doesn't solve or speedup anything, really.
>
> It can even be slower: when small CMA pages sit on the PCP lists, a large
> CMA allocation can fail until the PCP lists are drained. This assumes the
> code is trying to get CMA pages from the buddy allocator, which, as David
> mentioned above, is not how CMA memory is designed to be used.
>

Thanks Zi for confirming!
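
For reference, the fallback Zi mentions is the ALLOC_CMA flag set in mm/page_alloc.c: MOVABLE allocations are allowed to dip into CMA pageblocks. The helper looks roughly like this in recent kernels (quoted from memory, so double-check against your tree):

```c
/* mm/page_alloc.c (approximate): MOVABLE allocations may also use CMA pageblocks. */
static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
						  unsigned int alloc_flags)
{
#ifdef CONFIG_CMA
	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
		alloc_flags |= ALLOC_CMA;
#endif
	return alloc_flags;
}
```
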

> >
> > I still have no clue what this patch here tried to solve: it doesn't make any sense.
>
>
> Best Regards,
> Yan, Zi



More information about the Linux-mediatek mailing list