[RFC PATCH RESEND 21/28] mm: introduce find_and_lock_anon_vma to be used from arch-specific code
Laurent Dufour
ldufour at linux.ibm.com
Fri Sep 9 07:38:27 PDT 2022
On 01/09/2022 at 19:35, Suren Baghdasaryan wrote:
> Introduce the find_and_lock_anon_vma() function to look up and lock an
> anonymous VMA during page fault handling. When the VMA is not found, can't be
> locked, or changes after being locked, the function returns NULL. The lookup is
> performed under RCU protection to prevent the found VMA from being
> destroyed before the VMA lock is acquired. VMA lock statistics are
> updated according to the results.
>
> Signed-off-by: Suren Baghdasaryan <surenb at google.com>
> ---
> include/linux/mm.h | 3 +++
> mm/memory.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 48 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 7c3190eaabd7..a3cbaa7b9119 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -684,6 +684,9 @@ static inline void vma_assert_no_reader(struct vm_area_struct *vma)
> vma);
> }
>
> +struct vm_area_struct *find_and_lock_anon_vma(struct mm_struct *mm,
> + unsigned long address);
> +
> #else /* CONFIG_PER_VMA_LOCK */
>
> static inline void vma_init_lock(struct vm_area_struct *vma) {}
> diff --git a/mm/memory.c b/mm/memory.c
> index 29d2f49f922a..bf557f7056de 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5183,6 +5183,51 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> }
> EXPORT_SYMBOL_GPL(handle_mm_fault);
>
> +#ifdef CONFIG_PER_VMA_LOCK
> +static inline struct vm_area_struct *find_vma_under_rcu(struct mm_struct *mm,
> + unsigned long address)
> +{
> + struct vm_area_struct *vma = __find_vma(mm, address);
> +
> + if (!vma || vma->vm_start > address)
> + return NULL;
> +
> + if (!vma_is_anonymous(vma))
> + return NULL;
> +
It looks more natural to me to first check that the VMA is still part of the RB
tree before trying to read-lock it.
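Something like this, perhaps (just a sketch, using the helpers from this patch;
the recheck after taking the lock would still be needed, since the VMA may be
isolated between the unlocked check and the trylock):

	/* Bail out early if the VMA has already been isolated */
	if (RB_EMPTY_NODE(&vma->vm_rb))
		return NULL;

	if (!vma_read_trylock(vma)) {
		count_vm_vma_lock_event(VMA_LOCK_ABORT);
		return NULL;
	}

	/* Recheck under the lock, as done below in the patch */
	if (RB_EMPTY_NODE(&vma->vm_rb)) {
		vma_read_unlock(vma);
		count_vm_vma_lock_event(VMA_LOCK_MISS);
		return NULL;
	}
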
> + if (!vma_read_trylock(vma)) {
> + count_vm_vma_lock_event(VMA_LOCK_ABORT);
> + return NULL;
> + }
> +
> + /* Check if the VMA got isolated after we found it */
> + if (RB_EMPTY_NODE(&vma->vm_rb)) {
> + vma_read_unlock(vma);
> + count_vm_vma_lock_event(VMA_LOCK_MISS);
> + return NULL;
> + }
> +
> + return vma;
> +}
> +
> +/*
> + * Look up and lock an anonymous VMA. The returned VMA is guaranteed to be
> + * stable and not isolated. If the VMA is not found or is being modified, the
> + * function returns NULL.
> + */
> +struct vm_area_struct *find_and_lock_anon_vma(struct mm_struct *mm,
> + unsigned long address)
> +{
> + struct vm_area_struct *vma;
> +
> + rcu_read_lock();
> + vma = find_vma_under_rcu(mm, address);
> + rcu_read_unlock();
> +
> + return vma;
> +}
> +#endif /* CONFIG_PER_VMA_LOCK */
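For reference, roughly how I read an arch fault handler would use this
(illustrative sketch only; the surrounding context, mm_flags and
FAULT_FLAG_VMA_LOCK come from the other patches in the series, not this one):

	/* Sketch of a caller in arch fault handling; names other than
	 * find_and_lock_anon_vma()/vma_read_unlock() are assumptions. */
	vma = find_and_lock_anon_vma(mm, addr);
	if (!vma)
		goto lock_mmap;		/* fall back to the mmap_lock path */

	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
	vma_read_unlock(vma);

	if (!(fault & VM_FAULT_RETRY))
		goto done;		/* handled without taking mmap_lock */

lock_mmap:
	/* ... existing mmap_lock-based slow path ... */
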
> +
> #ifndef __PAGETABLE_P4D_FOLDED
> /*
> * Allocate p4d page table.