[PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map
Edgecombe, Rick P
rick.p.edgecombe at intel.com
Sun Oct 25 20:38:32 EDT 2020
On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> From: Mike Rapoport <rppt at linux.ibm.com>
>
> When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may
> not be present in the direct map and has to be explicitly mapped
> before it can be copied.
>
> On arm64 it is possible that a page would be removed from the direct
> map using set_direct_map_invalid_noflush(), but __kernel_map_pages()
> will refuse to map this page back if DEBUG_PAGEALLOC is disabled.
It looks to me like arm64's __kernel_map_pages() will still attempt to
map it if rodata_full is true. How does this happen?
> Explicitly use set_direct_map_{default,invalid}_noflush() for
> ARCH_HAS_SET_DIRECT_MAP case and debug_pagealloc_map_pages() for
> DEBUG_PAGEALLOC case.
>
> While at it, rename kernel_map_pages() to hibernate_map_page() and
> drop the numpages parameter.
>
> Signed-off-by: Mike Rapoport <rppt at linux.ibm.com>
> ---
> kernel/power/snapshot.c | 29 +++++++++++++++++++----------
> 1 file changed, 19 insertions(+), 10 deletions(-)
>
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index fa499466f645..ecb7b32ce77c 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -76,16 +76,25 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
> static inline void hibernate_restore_unprotect_page(void *page_address) {}
> #endif /* CONFIG_STRICT_KERNEL_RWX && CONFIG_ARCH_HAS_SET_MEMORY */
>
> -#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> -static inline void
> -kernel_map_pages(struct page *page, int numpages, int enable)
> +static inline void hibernate_map_page(struct page *page, int enable)
> {
> - __kernel_map_pages(page, numpages, enable);
> + if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> + unsigned long addr = (unsigned long)page_address(page);
> + int ret;
> +
> + if (enable)
> + ret = set_direct_map_default_noflush(page);
> + else
> + ret = set_direct_map_invalid_noflush(page);
> +
> + if (WARN_ON(ret))
> + return;
> +
> + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> + } else {
> + debug_pagealloc_map_pages(page, 1, enable);
> + }
> }
> -#else
> -static inline void
> -kernel_map_pages(struct page *page, int numpages, int enable) {}
> -#endif
>
> static int swsusp_page_is_free(struct page *);
> static void swsusp_set_page_forbidden(struct page *);
> @@ -1366,9 +1375,9 @@ static void safe_copy_page(void *dst, struct page *s_page)
> if (kernel_page_present(s_page)) {
> do_copy_page(dst, page_address(s_page));
> } else {
> - kernel_map_pages(s_page, 1, 1);
> + hibernate_map_page(s_page, 1);
> do_copy_page(dst, page_address(s_page));
> - kernel_map_pages(s_page, 1, 0);
> + hibernate_map_page(s_page, 0);
> }
> }
>
If somehow a page was unmapped such that
set_direct_map_default_noflush() would fail, then this code introduces
a WARN, but it will still try to read the unmapped page. Why not just
have the WARNs inside __kernel_map_pages() when the operations fail,
and add a warning for the debug page alloc case as well, since the
logic around both expects them not to fail?
More information about the linux-riscv mailing list