[PATCH 1/2] mm/tlb: fix fullmm semantics
Jisheng Zhang
jszhang at kernel.org
Thu Dec 28 00:46:41 PST 2023
From: Nadav Amit <namit at vmware.com>
fullmm in mmu_gather is supposed to indicate that the mm is being torn
down (e.g., on process exit), which therefore allows certain optimizations.
However, tlb_finish_mmu() sets fullmm when what it actually wants to convey
is that the TLB should be fully flushed.
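The intended distinction, roughly, is the following (a stand-alone sketch
for illustration only: the struct and function names below are invented,
only the two flag names mirror struct mmu_gather, and the bodies are
simplified):

#include <stdbool.h>
#include <stdio.h>

struct mmu_gather_model {
	bool fullmm;		/* the mm itself is being torn down      */
	bool need_flush_all;	/* mm stays alive, flush its entire TLB  */
};

/* Mirrors the arm64 tlb_flush() decision after this patch. */
static void tlb_flush_model(const struct mmu_gather_model *tlb)
{
	if (tlb->fullmm) {
		/* ASID won't be reused without a full flush: skip it. */
		return;
	}
	if (tlb->need_flush_all) {
		printf("flush_tlb_mm(): flush every entry for this mm\n");
		return;
	}
	printf("ranged flush of the gathered range only\n");
}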
Change tlb_finish_mmu() to set need_flush_all and check this flag in
tlb_flush_mmu_tlbonly() when deciding whether a flush is needed.
At the same time, bring back the arm64 fullmm optimization on process exit.
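Continuing the sketch above (again illustrative only: exit_mmap(),
tlb_gather_mmu_fullmm() and tlb_finish_mmu() are the real entry points,
but the bodies below are invented), the two callers now set different
flags, so the teardown optimization and the forced full flush no longer
alias each other:

/* Teardown, e.g. exit_mmap() via tlb_gather_mmu_fullmm(): mm goes away. */
static void teardown_path(struct mmu_gather_model *tlb)
{
	tlb->fullmm = true;
	tlb_flush_model(tlb);		/* arm64 may skip the flush      */
}

/* Forced flush, e.g. the nested-flush case in tlb_finish_mmu(). */
static void forced_full_flush(struct mmu_gather_model *tlb)
{
	tlb->need_flush_all = true;
	tlb_flush_model(tlb);		/* arm64 does flush_tlb_mm()     */
}

int main(void)
{
	struct mmu_gather_model a = { 0 }, b = { 0 };

	teardown_path(&a);		/* prints nothing                */
	forced_full_flush(&b);		/* prints the flush_tlb_mm line  */
	return 0;
}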
Signed-off-by: Nadav Amit <namit at vmware.com>
Signed-off-by: Jisheng Zhang <jszhang at kernel.org>
Cc: Andrea Arcangeli <aarcange at redhat.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Andy Lutomirski <luto at kernel.org>
Cc: Dave Hansen <dave.hansen at linux.intel.com>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Will Deacon <will at kernel.org>
Cc: Yu Zhao <yuzhao at google.com>
Cc: Nick Piggin <npiggin at gmail.com>
Cc: x86 at kernel.org
---
arch/arm64/include/asm/tlb.h | 5 ++++-
include/asm-generic/tlb.h | 2 +-
mm/mmu_gather.c | 2 +-
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 846c563689a8..6164c5f3b78f 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -62,7 +62,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
* invalidating the walk-cache, since the ASID allocator won't
* reallocate our ASID without invalidating the entire TLB.
*/
- if (tlb->fullmm) {
+ if (tlb->fullmm)
+ return;
+
+ if (tlb->need_flush_all) {
if (!last_level)
flush_tlb_mm(tlb->mm);
return;
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 129a3a759976..f2d46357bcbb 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -452,7 +452,7 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
* these bits.
*/
if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
- tlb->cleared_puds || tlb->cleared_p4ds))
+ tlb->cleared_puds || tlb->cleared_p4ds || tlb->need_flush_all))
return;
tlb_flush(tlb);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 4f559f4ddd21..79298bac3481 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -384,7 +384,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb)
* On x86 non-fullmm doesn't yield significant difference
* against fullmm.
*/
- tlb->fullmm = 1;
+ tlb->need_flush_all = 1;
__tlb_reset_range(tlb);
tlb->freed_tables = 1;
}
--
2.40.0