[RFC PATCH 2/3] mm/vmalloc.c: Only flush VM_FLUSH_RESET_PERMS area immediately
Baoquan He
bhe at redhat.com
Fri May 19 05:02:10 PDT 2023
When a vmalloc range mapping is freed, only the page table unmapping is
done immediately; the TLB flush is lazily deferred to a later stage,
until lazy_max_pages() is met or vmalloc() can't find an available
virtual memory region.
However, to free VM_FLUSH_RESET_PERMS vmalloc memory, the TLB flush
needs to be done immediately before freeing the pages, and the direct
map needs its permissions reset and its TLB entries flushed. Please see
commit 868b104d7379 ("mm/vmalloc: Add flag for freeing of special permsissions")
for more details.
In the current code, when freeing VM_FLUSH_RESET_PERMS memory, a lazy
purge is also performed to try to save a TLB flush later. Doing so
merges the direct map range, the percpu vbq dirty range, and all purge
ranges into one flush range of [min:max], which pulls the huge gap
between the direct map range and the vmalloc range into the final TLB
flush range. So here, only flush the VM_FLUSH_RESET_PERMS area
immediately, and leave the lazy flush to the normal points, e.g. when
allocating a new vmap_area, or when lazy_max_pages() is met.
Signed-off-by: Baoquan He <bhe at redhat.com>
---
mm/vmalloc.c | 25 +++++++++++++++++++++----
1 file changed, 21 insertions(+), 4 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 31e8d9e93650..87134dd8abc3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2690,9 +2690,10 @@ static inline void set_area_direct_map(const struct vm_struct *area,
*/
static void vm_reset_perms(struct vm_struct *area)
{
- unsigned long start = ULONG_MAX, end = 0;
+ unsigned long start = ULONG_MAX, end = 0, pages = 0;
unsigned int page_order = vm_area_page_order(area);
- int flush_dmap = 0;
+	LIST_HEAD(local_flush_list);
+ struct vmap_area alias_va, va;
int i;
/*
@@ -2708,17 +2709,33 @@ static void vm_reset_perms(struct vm_struct *area)
page_size = PAGE_SIZE << page_order;
start = min(addr, start);
end = max(addr + page_size, end);
- flush_dmap = 1;
}
}
+ va.va_start = (unsigned long)area->addr;
+ va.va_end = (unsigned long)(area->addr + area->size);
/*
* Set direct map to something invalid so that it won't be cached if
* there are any accesses after the TLB flush, then flush the TLB and
* reset the direct map permissions to the default.
*/
set_area_direct_map(area, set_direct_map_invalid_noflush);
- _vm_unmap_aliases(start, end, flush_dmap);
+ if (IS_ENABLED(CONFIG_HAVE_FLUSH_TLB_KERNEL_VAS)) {
+ if (end > start) {
+ pages = (end - start) >> PAGE_SHIFT;
+ alias_va.va_start = (unsigned long)start;
+ alias_va.va_end = (unsigned long)end;
+ list_add(&alias_va.list, &local_flush_list);
+ }
+
+ pages += area->size >> PAGE_SHIFT;
+ list_add(&va.list, &local_flush_list);
+
+ flush_tlb_kernel_vas(&local_flush_list, pages);
+ } else {
+ flush_tlb_kernel_range(start, end);
+ flush_tlb_kernel_range(va.va_start, va.va_end);
+ }
set_area_direct_map(area, set_direct_map_default_noflush);
}
--
2.34.1