[PATCH v3 04/12] mm: Introduce vma_pgtable_walk_{begin|end}()
peterx at redhat.com
Thu Mar 21 15:07:54 PDT 2024
From: Peter Xu <peterx at redhat.com>
Introduce per-vma begin()/end() helpers for pgtable walks. This is
preparation work for merging the hugetlb pgtable walkers with generic mm.

The helpers need to be called before and after a pgtable walk; they will
start to be needed once the pgtable walker code supports hugetlb pages.
They are a hook point for any type of VMA, but for now only hugetlb uses
them to stabilize the pgtable pages against going away (due to possible
pmd unsharing).
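
For illustration, a walker would bracket its traversal with the new
helpers roughly as below; walk_one_vma() and do_walk_range() are
hypothetical stand-ins, not part of this series:

    /*
     * Hypothetical caller sketch (not part of this patch): bracket a
     * pgtable walk with the new helpers so that, for hugetlb VMAs, the
     * vma lock is held for read across the walk and the pgtable pages
     * cannot be freed underneath us by concurrent pmd unsharing.
     */
    static int walk_one_vma(struct vm_area_struct *vma,
                            unsigned long start, unsigned long end)
    {
            int ret;

            vma_pgtable_walk_begin(vma);
            ret = do_walk_range(vma, start, end);  /* hypothetical walk body */
            vma_pgtable_walk_end(vma);

            return ret;
    }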
Reviewed-by: Christoph Hellwig <hch at infradead.org>
Reviewed-by: Muchun Song <muchun.song at linux.dev>
Signed-off-by: Peter Xu <peterx at redhat.com>
---
include/linux/mm.h | 3 +++
mm/memory.c | 12 ++++++++++++
2 files changed, 15 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8147b1302413..d10eb89f4096 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4198,4 +4198,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
}
+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
#endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 9bce1fa76dd7..4f2caf1c3c4d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6438,3 +6438,15 @@ void ptlock_free(struct ptdesc *ptdesc)
kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
}
#endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+ if (is_vm_hugetlb_page(vma))
+ hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+ if (is_vm_hugetlb_page(vma))
+ hugetlb_vma_unlock_read(vma);
+}
--
2.44.0