[PATCH v2 2/5] mm/swapin: Retry swapin by VMA lock if the lock was released for I/O
Barry Song (Xiaomi)
baohua at kernel.org
Wed Apr 29 21:04:24 PDT 2026
If do_swap_page() took the per-VMA lock and we dropped it only to wait
for I/O completion (e.g., via folio_wait_locked()), then when
do_swap_page() is retried after the I/O completes, the fault should still
qualify for the per-VMA-lock path.
Tested-by: Wang Lian <wanglian at kylinos.cn>
Tested-by: Kunwu Chan <chentao at kylinos.cn>
Reviewed-by: Wang Lian <lianux.mm at gmail.com>
Reviewed-by: Kunwu Chan <kunwu.chan at gmail.com>
Signed-off-by: Barry Song (Xiaomi) <baohua at kernel.org>
---
mm/memory.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 199214f8de08..00ee1599d637 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4791,6 +4791,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	unsigned long page_idx;
 	unsigned long address;
 	pte_t *ptep;
+	bool retry_by_vma_lock = false;
 
 	if (!pte_unmap_same(vmf))
 		goto out;
@@ -4896,8 +4897,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		swapcache = folio;
 
 	ret |= folio_lock_or_retry(folio, vmf);
-	if (ret & VM_FAULT_RETRY)
+	if (ret & VM_FAULT_RETRY) {
+		if (fault_flag_allow_retry_first(vmf->flags) &&
+		    !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT) &&
+		    (vmf->flags & FAULT_FLAG_VMA_LOCK))
+			retry_by_vma_lock = true;
 		goto out_release;
+	}
 
 	page = folio_file_page(folio, swp_offset(entry));
 	/*
@@ -5182,7 +5188,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	}
 	if (si)
 		put_swap_device(si);
-	return ret;
+	return ret | (retry_by_vma_lock ? VM_FAULT_RETRY_VMA : 0);
 }
 
 static bool pte_range_none(pte_t *pte, int nr_pages)
--
2.39.3 (Apple Git-146)
More information about the linux-riscv
mailing list