[PATCH] mm: remove zap_page_range and create zap_vma_pages
Palmer Dabbelt
palmer at dabbelt.com
Tue Feb 14 19:19:16 PST 2023
On Tue, 03 Jan 2023 16:27:32 PST (-0800), mike.kravetz at oracle.com wrote:
> zap_page_range was originally designed to unmap pages within an address
> range that could span multiple vmas. While working on [1], it was
> discovered that all callers of zap_page_range pass a range entirely within
> a single vma. In addition, the mmu notification call within zap_page
> range does not correctly handle ranges that span multiple vmas. When
> crossing a vma boundary, a new mmu_notifier_range_init/end call pair
> with the new vma should be made.
>
> Instead of fixing zap_page_range, do the following:
> - Create a new routine zap_vma_pages() that will remove all pages within
> the passed vma. Most users of zap_page_range pass the entire vma and
> can use this new routine.
> - For callers of zap_page_range not passing the entire vma, instead call
> zap_page_range_single().
> - Remove zap_page_range.
>
> [1] https://lore.kernel.org/linux-mm/20221114235507.294320-2-mike.kravetz@oracle.com/
> Suggested-by: Peter Xu <peterx at redhat.com>
> Signed-off-by: Mike Kravetz <mike.kravetz at oracle.com>
[...]
> diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
> index e410275918ac..5c30212d8d1c 100644
> --- a/arch/riscv/kernel/vdso.c
> +++ b/arch/riscv/kernel/vdso.c
> @@ -124,13 +124,11 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
> mmap_read_lock(mm);
>
> for_each_vma(vmi, vma) {
> - unsigned long size = vma->vm_end - vma->vm_start;
> -
> if (vma_is_special_mapping(vma, vdso_info.dm))
> - zap_page_range(vma, vma->vm_start, size);
> + zap_vma_pages(vma);
> #ifdef CONFIG_COMPAT
> if (vma_is_special_mapping(vma, compat_vdso_info.dm))
> - zap_page_range(vma, vma->vm_start, size);
> + zap_vma_pages(vma);
> #endif
> }
Acked-by: Palmer Dabbelt <palmer at rivosinc.com> # RISC-V
Thanks!