[PATCH v5 3/3] arm64: hibernate: Support DEBUG_PAGEALLOC
Catalin Marinas
catalin.marinas at arm.com
Tue Aug 23 10:06:57 PDT 2016
On Tue, Aug 23, 2016 at 02:33:04PM +0100, James Morse wrote:
> On 22/08/16 19:51, Catalin Marinas wrote:
> > On Mon, Aug 22, 2016 at 06:35:19PM +0100, James Morse wrote:
> >> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> >> index b4082017c4cb..da4470de1807 100644
> >> --- a/arch/arm64/kernel/hibernate.c
> >> +++ b/arch/arm64/kernel/hibernate.c
> >> @@ -235,6 +235,7 @@ out:
> >> return rc;
> >> }
> >>
> >> +#define dcache_clean_range(start, end) __flush_dcache_area(start, (end - start))
> >>
> >> int swsusp_arch_suspend(void)
> >> {
> >> @@ -252,8 +253,14 @@ int swsusp_arch_suspend(void)
> >> if (__cpu_suspend_enter(&state)) {
> >> ret = swsusp_save();
> >> } else {
> >> - /* Clean kernel to PoC for secondary core startup */
> >> - __flush_dcache_area(LMADDR(KERNEL_START), KERNEL_END - KERNEL_START);
> >> + /* Clean kernel core startup/idle code to PoC */
> >> + dcache_clean_range(__mmuoff_text_start, __mmuoff_text_end);
> >> + dcache_clean_range(__mmuoff_data_start, __mmuoff_data_end);
> >> + dcache_clean_range(__idmap_text_start, __idmap_text_end);
> >> +
> >> + /* Clean kvm setup code to PoC? */
> >> + if (el2_reset_needed())
> >> + dcache_clean_range(__hyp_idmap_text_start, __hyp_idmap_text_end);
> >>
> >> /*
> >> * Tell the hibernation core that we've just restored
> >> @@ -269,6 +276,32 @@ int swsusp_arch_suspend(void)
> >> return ret;
> >> }
> >>
> >> +static void _copy_pte(pte_t *dst_pte, pte_t *src_pte, unsigned long addr)
> >> +{
> >> + unsigned long pfn = virt_to_pfn(addr);
[...]
> > Something I missed in the original hibernation support but it may look
> > better if you have something like:
> >
> > pte_t pte = *src_pte;
>
> Sure,
>
> >> +
> >> + if (pte_valid(*src_pte)) {
> >> + /*
> >> + * Resume will overwrite areas that may be marked
> >> + * read only (code, rodata). Clear the RDONLY bit from
> >> + * the temporary mappings we use during restore.
> >> + */
> >> + set_pte(dst_pte, __pte(pte_val(*src_pte) & ~PTE_RDONLY));
> >
> > and here:
> >
> > set_pte(dst_pte, pte_mkwrite(pte));
>
> pte_mkwrite() doesn't clear the PTE_RDONLY flag.
> Should it be changed?
I missed this. The PTE_RDONLY flag is supposed to be cleared by
set_pte_at() but we don't call this function here. Well, I guess it's
better as you originally wrote it, so no need to change.
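So, keeping the explicit mask but picking up the local variable from above,
the valid case would look something like this (untested, just to illustrate):

	pte_t pte = *src_pte;

	if (pte_valid(pte)) {
		/*
		 * Resume will overwrite areas that may be marked read only
		 * (code, rodata). Clear the RDONLY bit from the temporary
		 * mappings we use during restore; set_pte_at() would normally
		 * take care of this, but we only call set_pte() here.
		 */
		set_pte(dst_pte, __pte(pte_val(pte) & ~PTE_RDONLY));
	}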
> >> + } else if (debug_pagealloc_enabled()) {
> >> + /*
> >> + * debug_pagealloc may have removed the PTE_VALID bit if
> >> + * the page isn't in use by the resume kernel. It may have
> >> + * been in use by the original kernel, in which case we need
> >> + * to put it back in our copy to do the restore.
> >> + *
> >> + * Check for mappable memory that gives us a translation
> >> + * like part of the linear map.
> >> + */
> >> + if (pfn_valid(pfn) && pte_pfn(*src_pte) == pfn)
> >
> > Is there a case where this condition is false?
>
> Hopefully not, but I tried to avoid marking whatever happens to be there as
> valid. This is as paranoid as I can make it: checking that the pfn is
> mapped, and that the output-address part of the record is correct.
>
> If you're happy with the assumption that only valid records ever appear in
> the linear map page tables (and that anything marked not-valid is a result
> of debug_pagealloc), then we can change this to !pte_none().
I think we can go for the !pte_none() check, but with a
BUG_ON(!pfn_valid(pte_pfn(*src_pte))).
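i.e. something like this (untested), reusing the local pte:

	} else if (debug_pagealloc_enabled() && !pte_none(pte)) {
		/*
		 * Anything non-none left here should only have lost its
		 * valid bit to debug_pagealloc, so the pfn must be sane.
		 */
		BUG_ON(!pfn_valid(pte_pfn(pte)));
		set_pte(dst_pte, __pte((pte_val(pte) & ~PTE_RDONLY) | PTE_VALID));
	}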
> >> + set_pte(dst_pte, __pte((pte_val(*src_pte) & ~PTE_RDONLY) | PTE_VALID));
> >
> > With some more macros:
> >
> > set_pte(dst_pte, pte_mkwrite(pte_mkpresent(pte)))
> >
> > (pte_mkpresent() needs to be added)
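FWIW, pte_mkpresent() could reuse the existing set_pte_bit() helper,
something like (untested):

static inline pte_t pte_mkpresent(pte_t pte)
{
	/* Set PTE_VALID, leaving all other attributes untouched */
	return set_pte_bit(pte, __pgprot(PTE_VALID));
}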
>
> >> + }
> >> +}
> >> +
> >> static int copy_pte(pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long start,
> >> unsigned long end)
> >> {
> >> @@ -284,13 +317,7 @@ static int copy_pte(pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long start,
> >>
> >> src_pte = pte_offset_kernel(src_pmd, start);
> >> do {
> >> - if (!pte_none(*src_pte))
> >
> > You seem to no longer check for pte_none(). Is this not needed or
> > covered by the pte_pfn() != pfn check above?
>
> A bit of both:
> Previously this copied over any values it found. _copy_pte() now copies valid
> values, and if debug_pagealloc is turned on, tries to guess whether
> the non-valid values should be copied and marked valid.
I haven't looked in detail at debug_pagealloc but if it only ever clears
the valid bit, we shouldn't have an issue.
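Judging by the context lines in the hunk below, __kernel_map_pages() only
flips PTE_VALID via __change_memory_common()'s set/clear masks, so the
unmap path should be roughly:

	/* enable == 0: clear PTE_VALID, leave all other bits in place */
	__change_memory_common(addr, PAGE_SIZE * numpages,
				__pgprot(0),		/* set mask */
				__pgprot(PTE_VALID));	/* clear mask */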
> >> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> >> index ca6d268e3313..b6c0da84258c 100644
> >> --- a/arch/arm64/mm/pageattr.c
> >> +++ b/arch/arm64/mm/pageattr.c
> >> @@ -139,4 +139,42 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
> >> __pgprot(0),
> >> __pgprot(PTE_VALID));
> >> }
> >> -#endif
> >> +#ifdef CONFIG_HIBERNATION
> >> +/*
> >> + * When built with CONFIG_DEBUG_PAGEALLOC and CONFIG_HIBERNATION, this function
> >> + * is used to determine if a linear map page has been marked as not-present by
> >> + * CONFIG_DEBUG_PAGEALLOC. Walk the page table and check the PTE_VALID bit.
> >> + * This is based on kern_addr_valid(), which almost does what we need.
> >> + */
> >> +bool kernel_page_present(struct page *page)
> >> +{
> >> + pgd_t *pgd;
> >> + pud_t *pud;
> >> + pmd_t *pmd;
> >> + pte_t *pte;
> >> + unsigned long addr = (unsigned long)page_address(page);
> >> +
> >> + pgd = pgd_offset_k(addr);
> >> + if (pgd_none(*pgd))
> >> + return false;
> >> +
> >> + pud = pud_offset(pgd, addr);
> >> + if (pud_none(*pud))
> >> + return false;
> >> + if (pud_sect(*pud))
> >> + return true;
> >
> > This wouldn't normally guarantee "present" but I don't think we ever
> > have a non-present section mapping for the kernel (we do for user
> > though). You may want to add a comment.
>
> Sure.
>
> Just in case I've totally misunderstood:
> > * Because this is only called on the kernel linear map we don't need to
> > * use p?d_present() to check for PROT_NONE regions, as these don't occur
> > * in the linear map.
Something simpler not to confuse the reader with PTE_PROT_NONE:
/*
* Because this is only called on the kernel linear map,
* p?d_sect() implies p?d_present(). When debug_pagealloc is
* enabled, section mappings are disabled.
*/
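With that comment in place, the rest of the walk should just mirror the pud
level, something like (untested):

	pud = pud_offset(pgd, addr);
	if (pud_none(*pud))
		return false;
	/*
	 * Because this is only called on the kernel linear map,
	 * p?d_sect() implies p?d_present(). When debug_pagealloc is
	 * enabled, section mappings are disabled.
	 */
	if (pud_sect(*pud))
		return true;

	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd))
		return false;
	if (pmd_sect(*pmd))
		return true;

	pte = pte_offset_kernel(pmd, addr);
	return pte_valid(*pte);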
--
Catalin