[PATCH v8 1/2] kho: fix deferred initialization of scratch areas
Michał Cłapiński
mclapinski at google.com
Mon Apr 20 06:11:03 PDT 2026
On Thu, Apr 16, 2026 at 6:13 PM Mike Rapoport <rppt at kernel.org> wrote:
>
> On Thu, Apr 16, 2026 at 05:06:10PM +0200, Michał Cłapiński wrote:
> > On Thu, Apr 16, 2026 at 4:45 PM Mike Rapoport <rppt at kernel.org> wrote:
> > >
> > > Hi Michal,
> > >
> > > On Thu, Apr 16, 2026 at 01:06:53PM +0200, Michal Clapinski wrote:
> > > > @@ -2262,6 +2253,12 @@ static void __init memmap_init_reserved_range(phys_addr_t start,
> > > > * access it yet.
> > > > */
> > > > __SetPageReserved(page);
> > > > +
> > > > +#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
> > >
> > > No need for #ifdef here, there's a stub returning false for
> > > CONFIG_MEMBLOCK_KHO_SCRATCH=n case.
> >
> > In all 3 places the #ifdef is there because MIGRATE_CMA might be
> > undefined. I already broke the mm-new branch in the past because of that.
>
> Hmm, that hurts :/
>
> The best I can think of is to add a static inline in memblock.h and ifdefs
> around it.
Sorry, I don't understand what you mean. What would that static inline contain?
> > > > + if (memblock_is_kho_scratch_memory(PFN_PHYS(pfn)) &&
> > > > + pageblock_aligned(pfn))
> > > > + init_pageblock_migratetype(page, MIGRATE_CMA, false);
> > > > +#endif
> > > > }
> > > > }
>
> --
> Sincerely yours,
> Mike.
More information about the kexec mailing list