[PATCH v2 2/2] kho: simplify page initialization in kho_restore_page()
Mike Rapoport
rppt at kernel.org
Tue Jan 20 05:05:03 PST 2026
On Fri, Jan 16, 2026 at 11:22:15AM +0000, Pratyush Yadav wrote:
> When restoring a page (from kho_restore_pages()) or a folio (from
> kho_restore_folio()), KHO must initialize the struct page. The
> initialization differs slightly depending on whether a folio or a set
> of 0-order pages is requested.
>
> Conceptually, it is quite simple. When restoring 0-order pages, each
> page gets a refcount of 1 and that's it. When restoring a folio, the
> head page gets a refcount of 1 and the tail pages get 0.
>
> kho_restore_page() tries to combine the two separate initialization
> flows into one piece of code. While it works fine, it is more
> complicated to read than it needs to be. Make the code simpler by
> splitting the two initialization paths into two separate functions.
> This improves readability by clearly showing how each type must be
> initialized.
>
> Signed-off-by: Pratyush Yadav <pratyush at kernel.org>
Reviewed-by: Mike Rapoport (Microsoft) <rppt at kernel.org>
> ---
>
> Changes in v2:
> - Use unsigned long for nr_pages.
>
> kernel/liveupdate/kexec_handover.c | 40 +++++++++++++++++++-----------
> 1 file changed, 26 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 709484fbf9fd..92da76977684 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -219,11 +219,32 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
> return 0;
> }
>
> +/* For physically contiguous 0-order pages. */
> +static void kho_init_pages(struct page *page, unsigned long nr_pages)
> +{
> + for (unsigned long i = 0; i < nr_pages; i++)
> + set_page_count(page + i, 1);
> +}
> +
> +static void kho_init_folio(struct page *page, unsigned int order)
> +{
> + unsigned long nr_pages = (1 << order);
> +
> + /* Head page gets refcount of 1. */
> + set_page_count(page, 1);
> +
> + /* For higher order folios, tail pages get a page count of zero. */
> + for (unsigned long i = 1; i < nr_pages; i++)
> + set_page_count(page + i, 0);
> +
> + if (order > 0)
> + prep_compound_page(page, order);
> +}
> +
> static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
> {
> struct page *page = pfn_to_online_page(PHYS_PFN(phys));
> unsigned long nr_pages;
> - unsigned int ref_cnt;
> union kho_page_info info;
>
> if (!page)
> @@ -241,20 +262,11 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
>
> /* Clear private to make sure later restores on this page error out. */
> page->private = 0;
> - /* Head page gets refcount of 1. */
> - set_page_count(page, 1);
> -
> - /*
> - * For higher order folios, tail pages get a page count of zero.
> - * For physically contiguous order-0 pages every pages gets a page
> - * count of 1
> - */
> - ref_cnt = is_folio ? 0 : 1;
> - for (unsigned long i = 1; i < nr_pages; i++)
> - set_page_count(page + i, ref_cnt);
>
> - if (is_folio && info.order)
> - prep_compound_page(page, info.order);
> + if (is_folio)
> + kho_init_folio(page, info.order);
> + else
> + kho_init_pages(page, nr_pages);
>
> adjust_managed_page_count(page, nr_pages);
> return page;
> --
> 2.52.0.457.g6b5491de43-goog
>
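As a side note for readers following the thread: the refcount rules spelled
out at the top of the commit message are what make caller-side cleanup
straightforward. Below is a minimal, illustrative sketch, not code from the
patch. It assumes the kho_restore_folio()/kho_restore_pages() signatures
referenced in the commit message; the function name and the folio_phys,
pages_phys and nr values are hypothetical stand-ins for state recovered from
KHO, and error handling is kept to a minimum.

	/*
	 * Illustrative only: shows how the refcount initialization
	 * translates into caller-side cleanup after restore.
	 */
	static int kho_restore_example(phys_addr_t folio_phys,
				       phys_addr_t pages_phys,
				       unsigned long nr)
	{
		struct folio *folio;
		struct page *pages;

		/* Folio case: only the head page carries a reference. */
		folio = kho_restore_folio(folio_phys);
		if (!folio)
			return -ENOENT;
		/* Dropping the single reference frees the whole folio. */
		folio_put(folio);

		/* 0-order case: every page carries its own reference. */
		pages = kho_restore_pages(pages_phys, nr);
		if (!pages)
			return -ENOENT;
		for (unsigned long i = 0; i < nr; i++)
			put_page(pages + i);	/* each page freed on its own */

		return 0;
	}
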
--
Sincerely yours,
Mike.