[PATCH 1/2] mm/tlb: fix fullmm semantics
Nadav Amit
nadav.amit at broadcom.com
Thu Jan 4 05:26:43 PST 2024
> On Jan 2, 2024, at 4:41 AM, Jisheng Zhang <jszhang at kernel.org> wrote:
>
> On Sat, Dec 30, 2023 at 11:54:02AM +0200, Nadav Amit wrote:
>
>>
>> My knowledge of arm64 is a bit limited, but the code does not seem
>> to match the comment, so if it is correct (which I strongly doubt),
>> the comment should be updated.
>
> will do if the above change is accepted by arm64
Jisheng, I expected somebody with arm64 knowledge to point this out, and
maybe I am wrong, but there is something about the correctness that I do
not understand; could you please explain?
In the following code:
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -62,7 +62,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	 * invalidating the walk-cache, since the ASID allocator won't
 	 * reallocate our ASID without invalidating the entire TLB.
 	 */
-	if (tlb->fullmm) {
+	if (tlb->fullmm)
+		return;
You skip the flush if fullmm is set. But if page tables are freed, you may
want to flush immediately rather than wait for the ASID to be freed, in
order to avoid speculative page walks; such walks have caused a mess on
x86 at least. No?
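
To make the concern concrete, here is a rough sketch of how I read the
pre-patch fullmm path (this is only my paraphrase of the existing arm64
code, not part of this patch): when page tables were freed, the walk-cache
was still invalidated immediately instead of relying on the ASID allocator.

/*
 * Sketch (my reading of the pre-patch arm64 tlb_flush()):
 * even when the whole mm is being torn down, issue a flush if
 * page tables were freed, so speculative walks cannot hit stale
 * walk-cache entries; otherwise defer to ASID rollover.
 */
static inline void tlb_flush(struct mmu_gather *tlb)
{
	bool last_level = !tlb->freed_tables;

	if (tlb->fullmm) {
		if (!last_level)
			flush_tlb_mm(tlb->mm);	/* page tables freed: flush now */
		return;
	}

	/* ... the regular ranged flush for the !fullmm case goes here ... */
}

With the proposed change, the freed_tables case also returns early and
relies solely on the ASID not being reused, which is the part whose
correctness I am unsure about with respect to speculative walks.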