[PATCH 1/2] mm/tlb: fix fullmm semantics

Will Deacon <will@kernel.org>
Thu Jan 4 06:40:13 PST 2024


On Thu, Jan 04, 2024 at 03:26:43PM +0200, Nadav Amit wrote:
> 
> 
> > On Jan 2, 2024, at 4:41 AM, Jisheng Zhang <jszhang@kernel.org> wrote:
> > 
> > On Sat, Dec 30, 2023 at 11:54:02AM +0200, Nadav Amit wrote:
> > 
> >> 
> >> My knowledge of arm64 is a bit limited, but the code does not seem
> >> to match the comment, so if it is correct (which I strongly doubt),
> >> the comment should be updated.
> > 
> Will do, if the above change is accepted by arm64.
> 
> Jisheng, I expected somebody with arm64 knowledge to point it out, and
> maybe I am wrong, but I really don’t understand something about the
> correctness; can you please explain?
> 
> In the following code:
> 
> --- a/arch/arm64/include/asm/tlb.h
> +++ b/arch/arm64/include/asm/tlb.h
> @@ -62,7 +62,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
> 	 * invalidating the walk-cache, since the ASID allocator won't
> 	 * reallocate our ASID without invalidating the entire TLB.
> 	 */
> -	if (tlb->fullmm) {
> +	if (tlb->fullmm)
> +		return;
> 
> You skip the flush if fullmm is on. But if page tables are freed, you may
> want to flush immediately rather than wait for the ASID to be freed, to
> avoid speculative page walks; such walks caused a mess on x86, at least.
> 
> No?

I think Catalin made the same observation here:

https://lore.kernel.org/r/ZZWh4c3ZUtadFqD1@arm.com

and it does indeed look broken.
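
For illustration, a minimal sketch of the condition Nadav is describing,
assuming the existing mmu_gather fields tlb->fullmm and tlb->freed_tables
and arm64's flush_tlb_mm() helper; this is only the shape of the concern,
not the actual fix:

	static inline void tlb_flush(struct mmu_gather *tlb)	/* sketch only */
	{
		/*
		 * On full-mm teardown, the flush can be deferred to ASID
		 * reallocation only if no page tables were freed. If tables
		 * were freed, flush now so the page-table walker cannot
		 * speculatively fetch through stale walk-cache entries.
		 */
		if (tlb->fullmm) {
			if (tlb->freed_tables)
				flush_tlb_mm(tlb->mm);
			return;
		}

		/* ... non-fullmm range invalidation as before ... */
	}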

Will


