[RFC PATCH 3/3] mm/vmalloc.c: change _vm_unmap_aliases() to do the purge first
Baoquan He
bhe at redhat.com
Fri May 19 05:03:09 PDT 2023
After a vb_free() invocation, the va is purged and put onto the purge
tree/list if the entire vmap_block has become dirty. If the block is not
entirely dirty, it stays on the percpu vmap_block_queue list, as in the
two diagrams below:
(1)
|-----|------------|-----------|-------|
|dirty|still mapped| dirty | free |
(2)
|------------------------------|-------|
| dirty | free |
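For reference, the tail of vb_free() decides between the two cases
roughly as follows (a condensed sketch of this kernel's mm/vmalloc.c,
with the block lookup and unmap details elided):

        spin_lock(&vb->lock);

        /* Expand the not-yet-flushed dirty range of this block. */
        vb->dirty_min = min(vb->dirty_min, offset);
        vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));

        vb->dirty += 1UL << order;
        if (vb->dirty == VMAP_BBMAP_BITS) {
                /*
                 * Entirely dirty, case (2): detach the block; its va
                 * reaches the purge tree/list via free_vmap_block().
                 */
                spin_unlock(&vb->lock);
                free_vmap_block(vb);
        } else {
                /*
                 * Partially dirty, case (1): the block stays on the
                 * percpu vmap_block_queue.
                 */
                spin_unlock(&vb->lock);
        }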
In the current _vm_unmap_aliases(), to reclaim those unmapped ranges and
flush them, the percpu vbq is iterated to calculate the flush range from
each vmap_block, covering both cases above. Then
purge_fragmented_blocks_allcpus() is called to purge the case (2)
vmap_blocks, since nothing is mapped in them any more, putting their vas
onto the purge tree/list. Finally, __purge_vmap_area_lazy() extends the
flush range again from the purge list. Obviously, the case (2)
vmap_block vas are accounted twice.
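Schematically, the current ordering looks like this (a condensed
sketch, not the literal code; the loop bodies are reduced to comments):

        /* _vm_unmap_aliases(), before this patch */
        for_each_possible_cpu(cpu) {
                /*
                 * Walks every vmap_block still on the vbq, i.e. both
                 * case (1) and case (2), and folds each dirty range
                 * into [start, end): first accounting of case (2).
                 */
        }

        mutex_lock(&vmap_purge_lock);
        /* Moves the case (2) vas onto the purge tree/list. */
        purge_fragmented_blocks_allcpus();
        /*
         * Walks the purge list and extends the flush range with the
         * same case (2) vas: second accounting.
         */
        if (!__purge_vmap_area_lazy(start, end) && flush)
                flush_tlb_kernel_range(start, end);
        mutex_unlock(&vmap_purge_lock);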
So move purge_fragmented_blocks_allcpus() up to purge the case (2)
vmap_block vas first; the vbq iteration then only has to accumulate the
dirty ranges of the remaining case (1) blocks. With this change, the
accounting of the case (1) dirty regions also happens inside the
vmap_purge_lock protected region, which makes the flush range
calculation more accurate by excluding concurrent purging on other cpus.
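With the reordering, the flow becomes (condensed the same way):

        mutex_lock(&vmap_purge_lock);
        /* Case (2) blocks leave the vbq for the purge list first. */
        purge_fragmented_blocks_allcpus();

        for_each_possible_cpu(cpu) {
                /*
                 * Only case (1) blocks remain on the vbq; their dirty
                 * ranges are accumulated into [start, end) under
                 * vmap_purge_lock, so no other cpu can purge them
                 * concurrently.
                 */
        }

        if (!__purge_vmap_area_lazy(start, end) && flush)
                flush_tlb_kernel_range(start, end);
        mutex_unlock(&vmap_purge_lock);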
Also rename _vm_unmap_aliases() to vm_unmap_aliases(), since it has no
caller other than the old vm_unmap_aliases() wrapper, which is removed.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
mm/vmalloc.c | 45 ++++++++++++++++++++-------------------------
1 file changed, 20 insertions(+), 25 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 87134dd8abc3..9f7cbd6182ad 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2236,8 +2236,23 @@ static void vb_free(unsigned long addr, unsigned long size)
                 spin_unlock(&vb->lock);
 }
 
-static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
+/**
+ * vm_unmap_aliases - unmap outstanding lazy aliases in the vmap layer
+ *
+ * The vmap/vmalloc layer lazily flushes kernel virtual mappings primarily
+ * to amortize TLB flushing overheads. What this means is that any page you
+ * have now, may, in a former life, have been mapped into kernel virtual
+ * address by the vmap layer and so there might be some CPUs with TLB entries
+ * still referencing that page (additional to the regular 1:1 kernel mapping).
+ *
+ * vm_unmap_aliases flushes all such lazy mappings. After it returns, we can
+ * be sure that none of the pages we have control over will have any aliases
+ * from the vmap layer.
+ */
+void vm_unmap_aliases(void)
 {
+        unsigned long start = ULONG_MAX, end = 0;
+        bool flush = false;
         int cpu;
 
         if (unlikely(!vmap_initialized))
@@ -2245,6 +2260,9 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 
         might_sleep();
 
+        mutex_lock(&vmap_purge_lock);
+        purge_fragmented_blocks_allcpus();
+
         for_each_possible_cpu(cpu) {
                 struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
                 struct vmap_block *vb;
@@ -2262,40 +2280,17 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
                                 start = min(s, start);
                                 end   = max(e, end);
 
-                                flush = 1;
+                                flush = true;
                         }
                         spin_unlock(&vb->lock);
                 }
                 rcu_read_unlock();
         }
 
-        mutex_lock(&vmap_purge_lock);
-        purge_fragmented_blocks_allcpus();
         if (!__purge_vmap_area_lazy(start, end) && flush)
                 flush_tlb_kernel_range(start, end);
         mutex_unlock(&vmap_purge_lock);
 }
-
-/**
- * vm_unmap_aliases - unmap outstanding lazy aliases in the vmap layer
- *
- * The vmap/vmalloc layer lazily flushes kernel virtual mappings primarily
- * to amortize TLB flushing overheads. What this means is that any page you
- * have now, may, in a former life, have been mapped into kernel virtual
- * address by the vmap layer and so there might be some CPUs with TLB entries
- * still referencing that page (additional to the regular 1:1 kernel mapping).
- *
- * vm_unmap_aliases flushes all such lazy mappings. After it returns, we can
- * be sure that none of the pages we have control over will have any aliases
- * from the vmap layer.
- */
-void vm_unmap_aliases(void)
-{
-        unsigned long start = ULONG_MAX, end = 0;
-        int flush = 0;
-
-        _vm_unmap_aliases(start, end, flush);
-}
 EXPORT_SYMBOL_GPL(vm_unmap_aliases);
 
 /**
--
2.34.1