[PATCH 1/2] mm/tlb: fix fullmm semantics

Dave Hansen dave.hansen at intel.com
Wed Jan 3 12:26:29 PST 2024


On 1/3/24 10:05, Catalin Marinas wrote:
>> --- a/mm/mmu_gather.c
>> +++ b/mm/mmu_gather.c
>> @@ -384,7 +384,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb)
>>  		 * On x86 non-fullmm doesn't yield significant difference
>>  		 * against fullmm.
>>  		 */
>> -		tlb->fullmm = 1;
>> +		tlb->need_flush_all = 1;
>>  		__tlb_reset_range(tlb);
>>  		tlb->freed_tables = 1;
>>  	}
> The optimisation here was added about a year later in commit
> 7a30df49f63a ("mm: mmu_gather: remove __tlb_reset_range() for force
> flush"). Do we still need to keep freed_tables = 1 here? I'd say only
> __tlb_reset_range().

I think __tlb_reset_range() can be dangerous here because it clears
->freed_tables.  On x86 at least, that can lead to skipping the TLB
flush IPI for CPUs that are in lazy TLB mode.  When those CPUs wake
back up, they might start using the freed page tables.
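
For reference, this is roughly the x86 logic I'm worried about
(paraphrasing arch/x86/mm/tlb.c from memory, so exact names may
differ in your tree).  Lazy-TLB CPUs only get the IPI when
->freed_tables made it into the flush info:

	/* native_flush_tlb_multi(), approximately: */
	if (info->freed_tables)
		/* Page tables were freed: IPI everyone, lazy or not. */
		on_each_cpu_mask(cpumask, flush_tlb_func,
				 (void *)info, true);
	else
		/* No freed tables: lazy CPUs may skip the flush. */
		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
				      (void *)info, true, cpumask);

If ->freed_tables has been zeroed by the time we get here, the lazy
CPUs never see the flush.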

Logically, this little hunk of code is just trying to turn the 'tlb'
from a ranged flush into a full one.  Ideally, just setting
'need_flush_all = 1' would be enough.
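
Something like this (an untested sketch, just to illustrate the ideal
end state) instead of the current three lines:

	if (mm_tlb_flush_nested(tlb->mm)) {
		/*
		 * Force a full flush, but keep the range and
		 * freed_tables metadata already gathered.
		 */
		tlb->need_flush_all = 1;
	}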

Is __tlb_reset_range() doing anything useful for other architectures?  I
think it's just destroying perfectly valid metadata on x86. ;)
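
For anyone following along, __tlb_reset_range() looks roughly like
this (paraphrasing mm/mmu_gather.c; details may vary by kernel
version):

	static void __tlb_reset_range(struct mmu_gather *tlb)
	{
		if (tlb->fullmm) {
			tlb->start = tlb->end = ~0;
		} else {
			tlb->start = TASK_SIZE;
			tlb->end = 0;
		}
		/* This is the metadata that gets thrown away: */
		tlb->freed_tables = 0;
		tlb->cleared_ptes = 0;
		tlb->cleared_pmds = 0;
		tlb->cleared_puds = 0;
		tlb->cleared_p4ds = 0;
	}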


