[PATCH v6 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

Yicong Yang yangyicong at huawei.com
Tue Nov 15 17:50:58 PST 2022


On 2022/11/16 7:38, Nadav Amit wrote:
> On Nov 14, 2022, at 7:14 PM, Yicong Yang <yangyicong at huawei.com> wrote:
> 
>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>> index 8a497d902c16..5bd78ae55cd4 100644
>> --- a/arch/x86/include/asm/tlbflush.h
>> +++ b/arch/x86/include/asm/tlbflush.h
>> @@ -264,7 +264,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>> }
>>
>> static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>> -					struct mm_struct *mm)
>> +					struct mm_struct *mm,
>> +					unsigned long uaddr)
> 
> Logic-wise it looks fine. I notice the “v6”, and it should not be blocking,
> but I would note that the name “arch_tlbbatch_add_mm()” does not make much
> sense once the function also takes an address.
> 

OK, the "add_mm" name still fits x86, since the address is unused there, but not arm64, where the per-page address now matters.

> It could’ve been something like arch_set_tlb_ubc_flush_pending() but that’s
> too long. I’m not very good with naming, but the current name is not great.
> 

What about arch_tlbbatch_add_pending()? Since x86 pends the flush operation
while arm64 pends the synchronization operation, arch_tlbbatch_add_pending()
should make sense for both.
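
To illustrate, a rough sketch of how the renamed helper could read on each
architecture (the x86 body follows the existing arch_tlbbatch_add_mm(), and
the arm64 body assumes the __flush_tlb_page_nosync() helper added earlier in
this series; not final code):

/*
 * On x86 the flush itself is deferred, so uaddr stays unused; on arm64
 * the per-page TLBI is issued immediately and only the DSB
 * synchronization is deferred to the batched flush.
 */

/* arch/x86/include/asm/tlbflush.h */
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
					     struct mm_struct *mm,
					     unsigned long uaddr)
{
	inc_mm_tlb_gen(mm);
	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
}

/* arch/arm64/include/asm/tlbflush.h */
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
					     struct mm_struct *mm,
					     unsigned long uaddr)
{
	__flush_tlb_page_nosync(mm, uaddr);
}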

Thanks.



