[PATCH v3 1/2] kho: add support for preserving vmalloc allocations

Mike Rapoport rppt at kernel.org
Mon Sep 15 07:01:01 PDT 2025


On Mon, Sep 08, 2025 at 11:14:23AM -0300, Jason Gunthorpe wrote:
> On Mon, Sep 08, 2025 at 01:35:27PM +0300, Mike Rapoport wrote:
> > +static struct kho_vmalloc_chunk *new_vmalloc_chunk(struct kho_vmalloc_chunk *cur)
> > +{
> > +	struct kho_vmalloc_chunk *chunk;
> > +	int err;
> > +
> > +	chunk = kzalloc(PAGE_SIZE, GFP_KERNEL);
> > +	if (!chunk)
> > +		return NULL;
> > +
> > +	err = kho_preserve_phys(virt_to_phys(chunk), PAGE_SIZE);
> > +	if (err)
> > +		goto err_free;
> 
> kzalloc() cannot be preserved, the only thing we support today is
> alloc_page(), so this code pattern shouldn't exist.
 
kzalloc(PAGE_SIZE) can be preserved: it's page aligned and we don't have to
restore it into a slab cache. But it may indeed be confusing for those who
copy-paste the code, so I'll change it.
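
Roughly along these lines (a sketch only, assuming the
kho_preserve_folio()/kho_restore_folio() helpers from the LUO series;
chunk-list linking left to the caller as in the original):

```c
/*
 * Sketch: allocate the chunk straight from the page allocator so the
 * folio preservation path can handle it. kho_preserve_folio() here is
 * the helper proposed in the LUO patches, not yet in this series.
 */
static struct kho_vmalloc_chunk *new_vmalloc_chunk(void)
{
	struct kho_vmalloc_chunk *chunk;
	struct folio *folio;
	int err;

	folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, 0);
	if (!folio)
		return NULL;

	chunk = folio_address(folio);
	err = kho_preserve_folio(folio);
	if (err) {
		folio_put(folio);
		return NULL;
	}

	/* caller links the new chunk into the list as before */
	return chunk;
}
```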

> Call alloc_page() and use a kho_preserve_page/folio() like the luo
> patches were doing. The pattern seems common it probably needs a small
> alloc/free helper.
> 
> > +	for (int i = 0; i < vm->nr_pages; i += (1 << order)) {
> > +		phys_addr_t phys = page_to_phys(vm->pages[i]);
> > +
> > +		err = __kho_preserve_order(track, PHYS_PFN(phys), order);
> > +		if (err)
> > +			goto err_free;
> 
> I think you should make a helper inline to document what is happening here:
> 
> /*
>  * Preserve a contiguous aligned list of order 0 pages that aggregate
>  * to a higher order allocation. Must be restored using
>  * kho_restore_page() on each order 0 page.
>  */
> kho_preserve_pages(page, order);

Maybe.
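
If we go that way the helper could be as small as this (sketch; it just
wraps the __kho_preserve_order() call from the loop above, with the
`track` pointer passed through as in the surrounding code):

```c
/*
 * Preserve a contiguous, aligned run of order-0 pages that together
 * back a higher-order allocation. Each order-0 page must be restored
 * individually with kho_restore_page().
 */
static inline int kho_preserve_pages(struct kho_mem_track *track,
				     struct page *page, unsigned int order)
{
	return __kho_preserve_order(track, page_to_pfn(page), order);
}
```

The loop body would then shrink to a single
kho_preserve_pages(track, vm->pages[i], order) call.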
 
> Jason

-- 
Sincerely yours,
Mike.


