[PATCH 1/2] mm: Allow architectures to request 'old' entries when prefaulting
Kirill A. Shutemov
kirill at shutemov.name
Mon Dec 28 17:05:48 EST 2020
On Mon, Dec 28, 2020 at 10:47:36AM -0800, Linus Torvalds wrote:
> On Mon, Dec 28, 2020 at 4:53 AM Kirill A. Shutemov <kirill at shutemov.name> wrote:
> >
> > So far I only found one more pin leak and always-true check. I don't see
> > how can it lead to crash or corruption. Keep looking.
>
> Well, I noticed that the nommu.c version of filemap_map_pages() needs
> fixing, but that's obviously not the case Hugh sees.
>
> No, I think the problem is the
>
> pte_unmap_unlock(vmf->pte, vmf->ptl);
>
> at the end of filemap_map_pages().
>
> Why?
>
> Because we've been updating vmf->pte as we go along:
>
> vmf->pte += xas.xa_index - last_pgoff;
>
> and I think that by the time we get to that "pte_unmap_unlock()",
> vmf->pte potentially points to past the edge of the page directory.
Well, if that's true we have a bigger problem: we would be setting up a pte
entry without holding the relevant PTL.
But I *think* we should be fine here: do_fault_around() limits start_pgoff
and end_pgoff to stay within the page table.
But it made me look at the code around pte_unmap_unlock(), and I think the
bug is that we have to reset vmf->address and NULLify vmf->pte once we are
done with the faultaround:
diff --git a/mm/memory.c b/mm/memory.c
index 829f5056dd1c..405f5c73ce3e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3794,6 +3794,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 		update_mmu_tlb(vma, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+	vmf->address = address;
+	vmf->pte = NULL;
 	return ret;
 }
--
Kirill A. Shutemov