[PATCH 1/2] mm/tlb: fix fullmm semantics

Jisheng Zhang jszhang at kernel.org
Mon Jan 1 18:41:40 PST 2024


On Sat, Dec 30, 2023 at 11:54:02AM +0200, Nadav Amit wrote:
> 
> 
> > On Dec 28, 2023, at 10:46 AM, Jisheng Zhang <jszhang at kernel.org> wrote:
> > 
> > From: Nadav Amit <namit at vmware.com>
> > 
> > fullmm in mmu_gather is supposed to indicate that the mm is torn-down
> > (e.g., on process exit) and can therefore allow certain optimizations.
> > However, tlb_finish_mmu() sets fullmm, when in fact it wants to say that
> > the TLB should be fully flushed.
> > 
> > Change tlb_finish_mmu() to set need_flush_all and check this flag in
> > tlb_flush_mmu_tlbonly() when deciding whether a flush is needed.
> > 
> > At the same time, bring back the arm64 fullmm on process exit optimization.
> > 
> > Signed-off-by: Nadav Amit <namit at vmware.com>
> > Signed-off-by: Jisheng Zhang <jszhang at kernel.org>
> > Cc: Andrea Arcangeli <aarcange at redhat.com>
> > Cc: Andrew Morton <akpm at linux-foundation.org>
> > Cc: Andy Lutomirski <luto at kernel.org>
> > Cc: Dave Hansen <dave.hansen at linux.intel.com>
> > Cc: Peter Zijlstra <peterz at infradead.org>
> > Cc: Thomas Gleixner <tglx at linutronix.de>
> > Cc: Will Deacon <will at kernel.org>
> > Cc: Yu Zhao <yuzhao at google.com>
> > Cc: Nick Piggin <npiggin at gmail.com>
> > Cc: x86 at kernel.org
> > ---
> > arch/arm64/include/asm/tlb.h | 5 ++++-
> > include/asm-generic/tlb.h    | 2 +-
> > mm/mmu_gather.c              | 2 +-
> > 3 files changed, 6 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> > index 846c563689a8..6164c5f3b78f 100644
> > --- a/arch/arm64/include/asm/tlb.h
> > +++ b/arch/arm64/include/asm/tlb.h
> > @@ -62,7 +62,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
> > 	 * invalidating the walk-cache, since the ASID allocator won't
> > 	 * reallocate our ASID without invalidating the entire TLB.
> > 	 */
> > -	if (tlb->fullmm) {
> > +	if (tlb->fullmm)
> > +		return;
> > +
> > +	if (tlb->need_flush_all) {
> > 		if (!last_level)
> > 			flush_tlb_mm(tlb->mm);
> > 		return;
> > 
> 
> Thanks for pulling my patch out of the abyss, but the chunk above
> did not come from my old patch.

I stated this in the cover letter msg ;) IMHO, current arm64 uses fullmm
where it really means need_flush_all, so I think we need at least the
need_flush_all check.
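
To recap the intended semantics (per the comments on struct mmu_gather in
include/asm-generic/tlb.h): fullmm means the whole mm is going away, so
flushes can be optimized aggressively, while need_flush_all means a complete
TLB flush is required for an mm that stays alive.  The generic side of the
change described in the commit message (those hunks are not quoted above)
would then look roughly like this -- a sketch of my reading, not the exact
hunks:

	/* mm/mmu_gather.c, tlb_finish_mmu(): request a full flush rather
	 * than pretending the whole mm is being torn down */
	if (mm_tlb_flush_nested(tlb->mm)) {
		tlb->need_flush_all = 1;	/* previously: tlb->fullmm = 1 */
		__tlb_reset_range(tlb);
	}

	/* include/asm-generic/tlb.h, tlb_flush_mmu_tlbonly(): treat
	 * need_flush_all as one more reason to flush */
	if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
	      tlb->cleared_puds || tlb->cleared_p4ds || tlb->need_flush_all))
		return;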

I'd like to see comments from arm64 experts.

> 
> My knowledge of arm64 is a bit limited, but the code does not seem
> to match the comment, so if it is correct (which I strongly doubt),
> the comment should be updated.

Will do, if the above change is accepted by the arm64 maintainers.
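
Just to sketch possible wording (it only describes the new flow, without
taking a position on the correctness question above):

	/*
	 * fullmm: the mm is being torn down; the ASID allocator will not
	 * reallocate our ASID without invalidating the entire TLB, so no
	 * flush is issued here.
	 *
	 * need_flush_all: a full flush was requested for an mm that stays
	 * alive; flush_tlb_mm() is issued only when page tables were
	 * freed (!last_level).
	 */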

> 
> [1] https://lore.kernel.org/all/20210131001132.3368247-2-namit@vmware.com/
> 
> 


